I've blogged in the past about what I call linear literalism and fundamentalist econometrics, and I've blogged a bit about linear probability models (LPMs). Recently I've had some concerns about people outsourcing their thinking to LLMs, using these tools like Dunning-Kruger-as-a-Service (DKaaS), where the critical thinking and actual learning start and stop with prompt engineering and a response. Out of curiosity, I asked ChatGPT about the appropriateness of using linear probability models. Although the overall response was thoughtful about thinking more carefully about causality, it still gave the canned 'thou shalt not' theoretically correct fundamentalist answer.

My prompt could have been more sophisticated, but I tried to prompt from a user's perspective: someone who may not be as familiar with applied statistics work, or who may have read my blog and wanted to question the use of LPMs without thinking about the tradeoffs, or who may be unfamiliar with the social norms and practices related to their use. As has been noted before on this blog, in applied work there is no consensus among practitioners that nonlinear models (like logistic regression) are 'better' than LPMs when estimating treatment effects.

At best, this illustrates that a response from an LLM about applied econometric analysis could be as good as having another expert in the room. But an experienced practitioner understands that experts often disagree, that disagreement comes with a lot of nuance, and that it is often as much the result of social norms and practices as of theory. Perhaps someone could take the fundamentalist response from this prompt, do their analysis, solve their problem, and come to no harm at the end of the day. But there is danger in fundamentalism if it leads them to ignore great work and potential learnings derived from LPMs, or prevents them from getting more actionable and interpretable results, leaving them instead to stumble through the mental gymnastics sometimes required to interpret things like odds ratios vs. marginal effects!
Note: As I refined my prompt I got slightly better answers, but none that really captured the nuance I would like. Again, it's like arguing with an expert who knows the theory but may not be familiar with all the nuances of different disciplines and the practical implications of the tradeoffs involved. That said, giving a response close to what I would expect from another practitioner, perhaps one outside the field of economics, is impressive!
While it did a great job articulating the pros and cons of LPMs vs. logistic regression or other models, I think these responses would leave the casual reader with the impression that they should be wary of relying on LPMs to estimate treatment effects in most situations. As noted above, that could be an issue depending on the situation. Just because I disagree with the tone of the response doesn't mean ChatGPT is a bad tool; there are great textbooks and econometricians that I think would certainly endorse these responses! It's the missing nuance around these differences in opinion that concerns me, and while it's not super critical in this situation, I'd be concerned in others.
And I have to disagree that some of the 'cons' listed about LPMs are as big an issue as a reader might think if they take this response very seriously. Most of the listed 'cons' can often be addressed easily, and in my experience, despite all the alleged 'cons,' LPMs generally lead to the same decision you would make when properly interpreting and understanding results from a logistic regression, while providing all the very practical benefits you get from an LPM. The nuance is that 'cons' in theory don't always have implications that matter in any practical sense. As Angrist and Pischke state: "there is a lot to be said for sticking to a linear regression function as compared to a fairly arbitrary choice of a non-linear one!"
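To make the odds-ratio-vs-marginal-effect point concrete, here is a minimal sketch on simulated data (the treatment effect of +0.20, the baseline rate of 0.30, and the sample size are all assumptions for illustration, not numbers from any real study). With a single binary treatment, the logistic model is saturated and can be fit in closed form from the group means, which makes it easy to see that the LPM coefficient and the logit average marginal effect land in exactly the same place, while the odds ratio requires an extra interpretive step:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical randomized binary treatment with an assumed true effect
# of +0.20 on the probability of a binary outcome.
d = rng.binomial(1, 0.5, n)
y = rng.binomial(1, 0.30 + 0.20 * d)

p0, p1 = y[d == 0].mean(), y[d == 1].mean()

# LPM: with one binary regressor, the OLS slope is exactly the
# difference in group means -- directly a risk difference.
lpm_effect = p1 - p0

# Logistic regression: the saturated-model MLE reproduces the group
# means, so the coefficients follow in closed form.
logit = lambda p: np.log(p / (1 - p))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
b0 = logit(p0)
b1 = logit(p1) - logit(p0)          # log odds ratio
odds_ratio = np.exp(b1)

# Average marginal effect: average change in predicted probability
# when switching everyone from d=0 to d=1.
ame = sigmoid(b0 + b1) - sigmoid(b0)

print(f"LPM treatment effect:      {lpm_effect:.3f}")
print(f"Logit odds ratio:          {odds_ratio:.3f}")
print(f"Logit avg marginal effect: {ame:.3f}")
```

Both models recover the same ~0.20 probability-scale effect; the logit just hands it to you as an odds ratio first, which you then have to translate back before making a decision.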
Prompt: Is it OK to use a linear probability model to measure causal effects?
Using a linear probability model to measure causal effects can be problematic and is generally not recommended. A linear probability model (LPM) is a linear regression model where the dependent variable is a binary outcome, typically coded as 0 or 1. In this model, the predicted values can be outside the [0, 1] range, which can lead to biased estimates and incorrect interpretations.
There are several issues with using a linear probability model for causal effects:
1. Heteroscedasticity: The variance of the error term in the linear probability model is not constant across all values of the independent variables, violating one of the assumptions of linear regression. This can lead to inefficient estimates and incorrect standard errors.
2. Predicted probabilities outside [0, 1]: Since linear regression does not constrain the predicted values to be between 0 and 1, it can produce probabilities that fall outside this range, which is not meaningful for binary outcomes.
3. Non-linearity: The relationship between the independent variables and the probability of the binary outcome is often non-linear. Using a linear model may not capture this relationship accurately.
To address these issues, researchers typically use alternative methods for estimating causal effects with binary outcomes. The most common approaches are logistic regression and probit regression. These models are specifically designed for binary outcomes and provide estimates that are valid probabilities between 0 and 1.
Additionally, for estimating causal effects, researchers often use methods such as randomized controlled trials (RCTs) or quasi-experimental designs (e.g., difference-in-differences, matching methods, instrumental variables) to establish causality more convincingly.
In summary, while linear probability models might be used for exploratory analysis, they are generally not suitable for measuring causal effects with binary outcomes. Logistic regression or probit regression should be preferred when dealing with binary dependent variables, and careful consideration of study design and potential confounders is essential for establishing causality.
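As argued earlier, most of these 'cons' have routine fixes. The heteroskedasticity point, for instance, is a one-line robust-standard-error adjustment, not a reason to abandon the LPM. Here is a minimal numpy sketch on simulated data (the effect size, baseline rate, and sample size are assumptions for illustration) comparing classical OLS standard errors with heteroskedasticity-robust (HC1) ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical data: binary outcome regressed on a binary treatment.
d = rng.binomial(1, 0.5, n)
y = rng.binomial(1, 0.30 + 0.20 * d)

# LPM via OLS: X = [intercept, treatment].
X = np.column_stack([np.ones(n), d])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Classical OLS standard errors (assume constant error variance).
sigma2 = resid @ resid / (n - 2)
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# Heteroskedasticity-robust (HC1) sandwich standard errors: the
# standard fix for the LPM's non-constant error variance.
meat = (X * resid[:, None] ** 2).T @ X
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv) * n / (n - 2))

print(f"treatment effect: {beta[1]:.3f}")
print(f"classical SE:     {se_classical[1]:.4f}")
print(f"robust SE (HC1):  {se_robust[1]:.4f}")
```

In this balanced design the robust and classical standard errors barely differ, which is itself the point: the theoretical 'con' is real, but its practical bite is often small, and the fix is trivial. Note also that with a saturated model like this, the fitted values are just group means, so the out-of-[0, 1] predictions 'con' never even arises.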
Marc F. Bellemare. Love It or Logit, or: Man, People *Really* Care About Binary Dependent Variables. http://marcfbellemare.com/wordpress/9024
Jed Friedman. Whether to probit or to probe it: in defense of the Linear Probability Model. World Bank Blogs. https://blogs.worldbank.org/impactevaluations/whether-to-probit-or-to-probe-it-in-defense-of-the-linear-probability-model