Binary dependent variable models are often estimated using logistic regression or probit, but the estimated coefficients (or exponentiated coefficients expressed as odds ratios) are often difficult to interpret from a practical standpoint. Empirical economic research often reports ‘marginal effects’, which are more intuitive but can be more difficult to obtain from popular statistical software. The most straightforward way to obtain marginal effects is from estimation of linear probability models. This paper uses a toy data set to demonstrate the calculation of odds ratios and marginal effects from logistic regression using SAS and R, comparing them to the results from a standard linear probability model.
Suppose we have a
data set that looks at program participation (for some program or
product or service of interest) by age and we want to know the influence
of age on the decision to participate. Our data may look something like
the excerpt below:
This might call for logistic regression for modeling a dichotomous
outcome like participation, so we could use SAS or R to get the following output:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 5.92972 2.34258 2.531 0.0114 *
age -0.14099 0.05656 -2.493 0.0127 *
OR 2.5 % 97.5 %
(Intercept) 376.049897 6.2769262 7.864410e+04
age 0.868502 0.7641126 9.595017e-01
The estimated coefficients from logistic regression are not easily
interpretable: they represent the change in the log odds of
participation for a given change in age. Odds ratios, obtained by
exponentiating the estimated coefficients from logistic regression
(see also: The Calculation and Interpretation of Odds Ratios), might
provide a better summary of the effect of age on participation and may
be somewhat more meaningful. We can see the odds ratio associated with
age is .8685, which implies that for every one year increase in age the
odds of participation fall by about (.8685 - 1)*100 = -13.15%, i.e. the odds are about 13.15% lower.
You tell me what this means if this is the way you think about the
likelihood of outcomes in everyday life!
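For concreteness, the odds ratio arithmetic can be reproduced outside of SAS or R. Here is a minimal sketch in Python, using the estimated age coefficient from the logistic regression output above:

```python
import math

# Estimated coefficient on age from the logistic regression output above
b_age = -0.14099

# The odds ratio is the exponentiated coefficient
odds_ratio = math.exp(b_age)

# Percent change in the odds of participation per one-year increase in age
pct_change = (odds_ratio - 1) * 100

print(round(odds_ratio, 4), round(pct_change, 2))  # 0.8685 -13.15
```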
Marginal effects are
an alternative metric that can be used to describe the impact of age on
participation. Marginal effects can be described as the change in
outcome as a function of the change in the treatment (or independent
variable of interest) holding all other variables in the model constant.
In linear regression, the estimated regression coefficients are
marginal effects and are more easily interpreted (more on this later).
Marginal effects can be output easily from Stata; however, they are not
directly available in SAS or R. There are some ad hoc ways of
getting them, which I will demonstrate here (there are also some
packages in R to assist with this). I am basing most of this
directly on two very good blog posts on the topic (by WenSui Liu and on
the diffuseprior blog, both referenced below).
One approach is to use PROC QLIM and request output of marginal effects. This computes a marginal effect at each observation’s value of x in the data set (because marginal effects may not be constant across the range of explanatory variables). Taking the average of this result gives an estimated ‘sample average marginal effect’: -.0258
This tells us that for every year increase in age the probability of participation decreases on average by about 2.6 percentage points. For most people, for practical purposes, this is probably a more useful interpretation of the relationship between age and participation than odds ratios. We can calculate this more directly (following the code from the blog post by WenSui Liu) using output from logistic regression and the data step in SAS. Basically, for each observation in the data set calculate:
MARGIN_AGE = EXP(XB) / ((1 + EXP(XB)) ** 2) * (-0.1410);
Where -.1410 is the estimated coefficient on age from the original logistic regression model. We can run the same analysis in R, either replicating the results from the data step above, or using the mfx function defined by Alan Fernihough referenced in the diffuseprior blog post mentioned above or the paper referenced below.
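The quantity EXP(XB) / (1 + EXP(XB))**2 in the data step is the logistic density evaluated at the linear predictor XB, which equals p*(1-p) for the fitted probability p; the marginal effect is that density scaled by the coefficient. A minimal sketch of the same calculation in Python (the xb values would come from the fitted model; -0.1410 is the age coefficient from above):

```python
import math

B_AGE = -0.1410  # estimated coefficient on age from the logistic regression

def margin_age(xb):
    """Observation-level marginal effect of age:
    logistic density at xb times the coefficient."""
    p = 1 / (1 + math.exp(-xb))  # fitted probability
    # p * (1 - p) is algebraically equal to exp(xb) / (1 + exp(xb))**2
    return p * (1 - p) * B_AGE

# At xb = 0 the logistic density reaches its maximum of 0.25,
# so the marginal effect there is 0.25 * (-0.1410) = -0.03525
print(margin_age(0.0))
```

Averaging margin_age over every observation's xb reproduces the sample average marginal effect reported by PROC QLIM.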
The paper notes that this function gives similar results to the mfx function in Stata. We get almost the same results as from SAS above, with the addition of bootstrapped standard errors:
Marginal Effects from Linear Probability Models
Earlier I mentioned that you could estimate marginal effects directly from the estimated coefficients of a linear probability model. While in some circles LPMs are not viewed favorably, they have a strong following among applied econometricians (see references for more on this). As Angrist and Pischke state in their very popular book Mostly Harmless Econometrics:
"While a nonlinear model may fit the CEF (population conditional expectation function) for LDVs (limited dependent variables) more closely than a linear model, when it comes to marginal effects, this probably matters little"
Using SAS or R we can get the following results from estimating an LPM for this data:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.700260 0.378572 4.491 0.000111 ***
dat1$age -0.028699 0.009362 -3.065 0.004775 **
You can see that the estimate from the linear probability model above gives us a marginal effect (-.028699) almost identical to the previous estimates derived from logistic regression, as is often the case, and as indicated by Angrist and Pischke.
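To see why the LPM slope is itself a marginal effect, note that OLS on a binary outcome produces fitted probabilities that change by exactly the slope coefficient for each one-unit change in age, at every age. A toy illustration in Python (the four observations here are made up for the example, not the data from this post):

```python
# Hypothetical toy data: participation (1/0) by age
ages = [20, 30, 40, 50]
y = [1, 1, 0, 0]

# Closed-form OLS slope and intercept for a single regressor
n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(y) / n
slope = sum((x - mean_x) * (yi - mean_y) for x, yi in zip(ages, y)) / \
        sum((x - mean_x) ** 2 for x in ages)
intercept = mean_y - slope * mean_x

# In an LPM the marginal effect of age is the slope itself,
# constant across all observations (works out to -0.04 for this toy data)
print(intercept, slope)
```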
In the SAS ETS example cited in the references below, a distinction is made between calculating sample average marginal effects (which were discussed above) vs. calculating marginal effects at the mean:
“To evaluate the "average" or "overall" marginal effect, two approaches are frequently used. One approach is to compute the marginal effect at the sample means of the data. The other approach is to compute marginal effect at each observation and then to calculate the sample average of individual marginal effects to obtain the overall marginal effect. For large sample sizes, both the approaches yield similar results. However for smaller samples, averaging the individual marginal effects is preferred (Greene 1997, p. 876)”
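The two approaches in the quote can be sketched in Python under stated assumptions: the fitted intercept and age coefficient are taken from the logistic regression output above, while the ages are a small made-up sample for illustration:

```python
import math

# Fitted logistic regression coefficients from the output above
B0, B_AGE = 5.92972, -0.14099

# Hypothetical ages; in practice these come from the estimation sample
ages = [30, 35, 40, 45, 50]

def density(xb):
    """Logistic density at xb, i.e. p * (1 - p)."""
    p = 1 / (1 + math.exp(-xb))
    return p * (1 - p)

# Approach 1: marginal effect at the sample mean of the data (MEM)
mean_age = sum(ages) / len(ages)
mem = density(B0 + B_AGE * mean_age) * B_AGE

# Approach 2: average of observation-level marginal effects (AME)
ame = sum(density(B0 + B_AGE * a) * B_AGE for a in ages) / len(ages)

print(mem, ame)  # both negative; similar but not identical
```

With a large estimation sample the two numbers converge; with a small one, as Greene notes, the averaged observation-level effects (AME) are preferred.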
For a step by step review of the SAS and R code presented above as well as an additional example with multiple variables see:
Matt Bogard. "Comparing Odds Ratios and Marginal Effects from Logistic Regression and Linear Probability Models" Staff Paper (2016)
Available at: http://works.bepress.com/matt_bogard/30/
Fernihough, Alan. Simple logit and probit marginal effects in R. https://ideas.repec.org/p/ucn/wpaper/201122.html
SAS/ETS Web Examples Computing Marginal Effects for Discrete Dependent Variable Models. http://support.sas.com/rnd/app/examples/ets/margeff/
Linear Regression and Analysis of Variance with a Binary Dependent Variable (from EconomicSense, by Matt Bogard).
Angrist, Joshua D. & Jörn-Steffen Pischke. Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press. NJ. 2008.
Probit better than LPM? http://www.mostlyharmlesseconometrics.com/2012/07/probit-better-than-lpm/
Love It or Logit. By Marc Bellemare. marcfbellemare.com/wordpress/9024
R Data Analysis Examples: Logit Regression. From http://www.ats.ucla.edu/stat/r/dae/logit.htm (accessed March 4, 2016).
Greene, W. H. (1997), Econometric Analysis, Third edition, Prentice Hall, 339–350.