Sunday, April 6, 2025

Econometrics and Psychometrics: Rivers Out of Biometry.

Abstract

The period 1895–1925 saw the origins and establishment of the fields that came to be called econometrics and psychometrics. I consider what these fields owed to biometry—the statistical approach to the biological problems of evolution—and make some comparisons between all three. I emphasize developments in biology and psychology, for these are less familiar to historians of econometrics. These developments are interesting to contemplate, for the biometricians and psychometricians were already discussing issues associated with the respective roles of statistical analysis and of subject matter theory, issues that became prominent in econometrics only much later.

John Aldrich; Econometrics and Psychometrics: Rivers Out of Biometry. History of Political Economy 1 December 2011; 43 (suppl_1): 35–56. doi: https://doi.org/10.1215/00182702-1158790

Agricultural Economics as a Poster Child of Applied Economics

 Abstract

Agricultural economists have embodied the notions of applied economics for a long time. They have used economic principles to address real-world problems, integrating economics and scientific knowledge. Applied economics tends to be multidisciplinary and develop applied concepts, theories, and tools. Some, like human capital, diffusion of innovation, contingent valuation, and numerous numerical and econometric techniques have spread throughout economics. Agricultural economic research has been data intensive, and improved information technologies strengthen this tendency. Yet data without theory is of limited use and coevolution of theory and data are essential. Empirical analysis should incorporate quantitative information as well as narratives. We are challenged to understand the coevolution of business, supply chains, and technology, and how they are affected by policies and affect markets. Research should integrate agriculture, energy, and the environment and develop tools to analyze and regulate the emerging bio-economy integrating biotech and infotech.

Zilberman, D. (2019), Agricultural Economics as a Poster Child of Applied Economics: Big Data & Big Issues. American Journal of Agricultural Economics, 101: 353-364. https://doi.org/10.1093/ajae/aay101

Saturday, July 29, 2023

If Applied Econometrics Were Easy, LLMs Could Do It!

Summary

Can AI do applied econometrics and causal inference? Can LLMs pick up on the nuances and social norms that dictate so many of the decisions made in applied work and reflect them in response to a prompt? LLMs bring to the table incredible capabilities, efficiencies, and opportunities to create value. But there are risks when these tools are used like Dunning-Kruger-as-a-Service (DKaaS), where the critical thinking and actual learning begins and ends with prompt engineering and a response. We have to be very careful to recognize, as Philip Tetlock describes in his book "Superforecasting," that there is a difference between mimicking and reflecting meaning vs. originating meaning. To recognize that it’s not just what you know that matters, but how you know what you know. The second-handed tendency to believe that we can or should be outsourcing, nay, sacrificing our thinking to AI in exchange for misleading if not false promises about value, is philosophically and epistemically disturbing.

AI vs. Causal Thinking

This is a good article from causaLens: Enterprise Decision Making Needs More Than Chatbots

"while LLMs are good at learning and extracting information from a corpus, they’re blind to something that humans do really well – which is to measure the impact of one’s decisions." 

In a recent talk Cassie Kozyrkov puts it well: "AI does not automate thinking!"

   

Channelling Judea Pearl, understanding what makes a difference (causality) requires more than data; it also requires something not in the data to begin with. So much of the hype around AI is based on a tools and technology mindset. As Captain Jack Sparrow says about ships in Pirates of the Caribbean, a ship is more than sails and rudders; those are things a ship needs, but what a ship really is, is freedom. Causal inference is more than methods and theorems; those are things causal inference needs, but what it really is, is a way of thinking. And in business, what is required is an alignment of thinking. For instance, in his article The Importance of Being Causal, Ivor Bojinov describes the Causal Data Analysis Review Committee at LinkedIn. It is a common best practice in learning organizations that leverage experimentation and causal inference.

If you attend very many of those reviews, you begin to appreciate the amount of careful thinking required to understand the business problem, frame the hypothesis, and translate it into an analytical solution....then interpret the results and make a recommendation about what action to take next. Similarly, a typical machine learning workflow requires up-front thinking and problem framing. But unlike training an ML model, as Scott Lundberg describes (see my LI post: Beyond SHAP Values and Crystal Balls), understanding what makes a difference is not just a matter of letting an algorithm figure out the best predictors and calling it a day; there is an entire garden of forking paths to navigate, each turn requires more thinking, and there is a vast difference in opinions among 'experts' about which direction to go.

As I discussed in a past post about forking paths in analysis:

"even if all I am after is a single estimate of a given regression coefficient, multiple testing and researcher degrees of freedom may actually become quite a relevant concern...and this reveals the fragility in a lot of empirical work that prudence would require us to view with a critical eye"

Sure, you could probably pair an LLM with statistical software and a database connection and ask it to run a regression, but getting back to Jack Sparrow's ship analogy, a regression is more than just fitting a line to data and testing for heteroskedasticity and multicollinearity (let's hope that if LLMs train on econometrics textbooks they don't weight the value of information by the amount of material dedicated to multicollinearity!) and the laundry list of textbook assumptions. AI could probably even describe in words a mechanical interpretation of the results. All of that is really cool, and something like that could save a lot of time and augment our workflows (which is valuable), but we also have to be careful about that tools mindset creeping back on us. All the things that AI may be able to do are only the things regression needs; to get where we need to go, to understand why, we need more than what AI can currently provide. We need thinking. So even for a basic regression, depending on our goals, the thinking required is currently and may always be beyond the capabilities of AI.
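
To be concrete about that mechanical part, here's a minimal sketch (Python with statsmodels, on simulated data with made-up variable names like x1 and x2) of the kind of 'things regression needs' that an LLM wired to statistical software could plausibly automate: fit the line, run a heteroskedasticity test, check variance inflation factors. Nothing in it tells you whether the model answers the question - that's the thinking part.

```python
# Minimal sketch (simulated data; variable names x1, x2, y are made up) of the
# purely mechanical steps an LLM hooked to statistical software might run:
# fit a line, test for heteroskedasticity, check multicollinearity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(42)
df = pd.DataFrame({"x1": rng.normal(size=500), "x2": rng.normal(size=500)})
df["y"] = 1 + 2 * df["x1"] - 1 * df["x2"] + rng.normal(size=500)

# OLS with heteroskedasticity-robust (HC1) standard errors
fit = smf.ols("y ~ x1 + x2", data=df).fit(cov_type="HC1")
print(fit.summary())

# Breusch-Pagan test for heteroskedasticity
bp_stat, bp_pval, _, _ = het_breuschpagan(fit.resid, fit.model.exog)
print(f"Breusch-Pagan p-value: {bp_pval:.3f}")

# Variance inflation factors for multicollinearity
for i, name in enumerate(fit.model.exog_names):
    if name != "Intercept":
        print(name, round(variance_inflation_factor(fit.model.exog, i), 2))
```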

When we think about these forking paths encountered in applied work, each path can end with a different measure of impact that comes with a number of caveats and tradeoffs to think about. There are seldom standard problems with standard solutions. The course of action taken requires conscious decisions and the meeting of minds among different expert judgements (if not explicitly then implicitly) that considers all the tradeoffs involved in moving from what may be theoretically correct and what is practically feasible. 

In his book A Guide to Econometrics, Peter Kennedy states that "Applied econometricians are continually faced with awkward compromises" and offers a great story about what it's like to do applied work:

"Econometric theory is like an exquisitely balanced French recipe, spelling out precisely with how many turns to mix the sauce, how many carats of spice to add, and for how many milliseconds to bake the mixture at exactly 474 degrees of temperature. But when the statistical cook turns to raw materials, he finds that hearts of cactus fruit are unavailable, so he substitutes chunks of cantaloupe; where the recipe calls for vermicelli he used shredded wheat; and he substitutes green garment die for curry, ping-pong balls for turtles eggs, and for Chalifougnac vintage 1883, a can of turpentine."

What choice would AI-driven causal inference make when it has to make the awkward compromise between Chalifougnac vintage 1883 and turpentine, and how would it explain the choice it made and the thinking that went into it? How would that choice stack up against the opinions of four other applied econometricians who would have chosen differently?

As Richard McElreath discusses in his great book Statistical Rethinking:

"Statisticians do not in general exactly agree on how to analyze anything but the simplest of problems. The fact that statistical inference uses mathematics does not imply that there is only one reasonable or useful way to conduct an analysis. Engineers use math as well, but there are many ways to build a bridge." 

This is why in applied economics so much of what we may consider 'best practices' are as much the result of social norms and practices as they are textbook theory. These norms are often established and evolve informally over time and are sometimes adapted to the particulars of circumstances and place unique to a business or decision-making environment, or research discipline (this explains, for instance, the language barriers between economists and epidemiologists, and why different language can be used to describe the same thing and the same language can mean different things to different practitioners). A kind of result of human action but not of human design, many best practices may seldom be formally codified or published in a way accessible for training a chatbot to read and understand. Would an algorithm be able to understand and relay back this nuance? I gave this a try by asking ChatGPT about linear probability models (LPMs), and while I was impressed with some of the detail, I'm not fully convinced at this point based on the answers I got. While it did a great job articulating the pros and cons of LPMs vs. logistic regression or other models, I think it would leave the casual reader with the impression that they should be wary of relying on LPMs to estimate treatment effects in most situations. So they miss out on the practical benefits (the 'pros' that come from using LPMs) while avoiding 'cons' that, as Angrist and Pischke might say, are mostly harmless. I would be concerned about more challenging econometric problems with more nuance and more appeal to social norms, practices, and thinking that an LLM may not be privy to.

ChatGPT as a Research Assistant

Outside of actually doing applied econometrics and causal inference, I have additional concerns about using LLMs and AI as tools for research and learning. At first it might seem really great if, instead of reading five journal articles, you could just have a tool like ChatGPT do the hard work for you and summarize them in a fraction of the time! And I agree this kind of summary knowledge is useful, but probably not in the way many users might think.

I have been thinking a lot about how much you get out of putting your hands on a paper or book and going through it: wrestling with the ideas and the paths leading from hypotheses to conclusions; following the cited references to retrace the steps of the authors and understand why, either slowly nudging your priors in new directions or reinforcing your existing perspective; and synthesizing these ideas with your own. Then summarizing, applying, and communicating this synthesis with others.

ChatGPT might give the impression that this is what it is doing, in a fraction of the time you could do it (literally seconds vs. hours or days). However, even if it gave verbatim the same summary you could write, the difference in the value created would be night and day. There is a big difference between the learning that takes place when you go through this process of integrative, complex thinking vs. just reading a summary delivered on a silver platter by ChatGPT. I’m skeptical that what I’m describing can be outsourced to AI without losing something important. I also think there are real risks and costs involved when these tools are used like Dunning-Kruger-as-a-Service (DKaaS), where the critical thinking and actual learning begins and ends with prompt engineering and a response.

When it comes to the practical application of this knowledge, and to thinking through and solving new problems, it’s not just what you know that matters, but how you know what you know. If all you have is a summary, will you know how to navigate the tradeoffs between what is theoretically correct and what is practically feasible to make the best decision about which forking path to take in an analysis? Knowing the importance of social norms and practices in doing applied work, and if the discussion above about LPMs is any indication, I'm not sure. And with just the summary, will you be able to quickly assimilate new developments in the field....or will you have to go back to ChatGPT? How much knowledge and important nuance is lost with every update? What is missed? Thinking!

As Cassie says in her talk, thinking is about:

"knowing what is worth saying...knowing what is worth doing, we are thinking when we are coming up with ideas, when we are solving problems, when we are being creative"

AI is not capable of doing these things, and believing, or even attempting or pretending, that we can get these things second-handed from an AI tool will ultimately erode the real human skills and capabilities essential to real productivity and growth over the long run. If we fail to accept this, we will hear a giant sucking sound: the ROI we thought we were going to get from AI in the short run by attempting to automate what can't be automated. That is the false promise of a tools and technology mindset.

It worries me that, with this same tools-and-technology, data-science-alchemy mindset, many managers who were once sold the snake oil that data scientists could simply spin data into gold with deep learning will now buy into the snake oil that LLMs can spin data into gold even cheaper, and send the thinkers packing!

Similarly Cassie says: "that may be the biggest problem, that management has not learned how to manage thinking...vs. what you can measure easily....thinking is something you can't force, you can only get in the way of it."

She elaborates a bit more about this in her LinkedIn post: "A misguided view of productivity could mean lost jobs for workers without whom organizations won't be able to thrive in the long run - what a painful mistake for everyone."

Thunking vs. Thinking

I did say that this kind of summary info can be useful. And I agree that the kinds of things AI and LLMs will be useful for are what Cassie refers to in her talk as 'thunking': the things that consume our time and resources but don't require thinking. Having done your homework, the kind of summary information you get from an LLM can help reinforce your thinking and learnings and save time in terms of manually googling or looking up a lot of things you once knew but have forgotten. If there is an area you haven't thought about in a while, it can be a great way to get back up to speed. And when trying to learn new things, it can be leveraged to speed up some aspects of your discovery process or make it more efficient, or even to help challenge or vet your thinking (virtually bouncing ideas back and forth). But to be useful, this still requires some background knowledge, and it should never be a substitute for putting your hands on a paper and doing the required careful and critical thinking.

One area of applied econometrics I have not mentioned is the often less glamorous work it takes to implement a solution. In addition to all the thinking involved in translating the solution and navigating the forking paths, there is a lot of time spent accessing and transforming the data and implementing the estimation, which involves coding (note that even in the midst of all that thunking work there is still thinking involved - sometimes we learn the most about our business and our problem while attempting to wrangle the data - so this is also a place where we need to be careful about what we automate). Lots of data science folks are also using these tools to speed up some of their programming tasks. I'm a habitual user of Stack Exchange and GitHub and constantly recycle my own code or others' code. But I burn a lot of time some days in search of what I need. That's the kind of thunking it makes sense to enlist new AI tools for!

Conclusion: Thinking is Our Responsibility

I've observed two extremes when it comes to opinions about tools like ChatGPT. One is that LLMs have the knowledge and wisdom of Yoda and will solve all of our problems. The other extreme is that because LLMs don't have the knowledge and wisdom of Yoda they are largely irrelevant. Obviously there is middle ground and I am trying to find it in this post. And I think Cassie has found it:

"AI does not automate thinking. It doesn't! There is a lot of strange rumblings about this that sound very odd to me who has been in this space for 2 decades"

I have sensed those same rumblings and it should make us all feel a bit uneasy. She goes on to say:

"when you are not the one making the decision and it looks like the machine is doing it, there is someone who is actually making that decision for you...and I think that we have been complacent and we have allowed our technology to be faceless....how will we hold them accountable....for wisdom...thinking is our responsibility"

Thinking is a moral responsibility. Outsourcing our thinking, fooling ourselves into believing we can get knowledge and wisdom and judgment second-handed from a summary written by an AI tool, and believing that this is the same thing and provides the same value as what we could produce as thinking humans, is a dangerous illusion, because thinking is ultimately the means by which the human race and civil society thrives and survives. In 2020 former President Barack Obama emphasized the importance of thinking in a democracy:

"if we do not have the capacity to distinguish what's true from what's false, then by definition the marketplace of ideas doesn't work. And by definition our democracy doesn't work. We are entering into an epistemological crisis." 

The wrong kind of tools and technology mindset, and obsequiousness toward the technology, and a second-handed tendency to believe that we can or should be outsourcing, nay, sacrificing our thinking to AI in exchange for misleading if not false promises about value, is philosophically and epistemically disturbing.

LLMs bring to the table incredible capabilities, efficiencies, and opportunities to create value. But we have to be very careful to recognize, as Philip Tetlock describes in his book Superforecasting, that there is a difference between mimicking and reflecting meaning vs. originating meaning. To recognize that it’s not just what you know that matters, but how you know what you know. To repurpose the closing statement from the book Mostly Harmless Econometrics: if applied econometrics were easy, LLMs could do it.

Additional Resources:

Thunking vs Thinking: Whose Job Does AI Automate? Which tasks are on AI’s chopping block? Cassie Kozyrkov. https://kozyrkov.medium.com/thunking-vs-thinking-whose-job-does-ai-automate-959e3585877b

Statistics is a Way of Thinking, Not Just a Box of Tools. https://econometricsense.blogspot.com/2020/04/statistics-is-way-of-thinking-not-just.html 

Will There Be a Credibility Revolution in Data Science and AI? https://econometricsense.blogspot.com/2018/03/will-there-be-credibility-revolution-in.html 

Note on updates: An original version of this post was written on July 29 in conjunction with the post On LLMs and LPMs: Does the LL in LLM Stand for Linear Literalism? Shortly after posting I ran across Cassie's talk and updated the post to incorporate many of the points she made, with the best of intentions. Any misrepresentation/misappropriation of her views is unintentional. 

On LLMs and LPMs: Does the LL in LLM Stand for Linear Literalism?

I've blogged in the past about what I call linear literalism and fundamentalist econometrics. And I've blogged a bit about linear probability models (LPMs). Recently I have had some concerns about people outsourcing their thinking to LLMs and using these tools like Dunning-Kruger-as-a-Service (DKaaS), where the critical thinking and actual learning starts and stops with prompt engineering and a response. Out of curiosity I asked ChatGPT about the appropriateness of using linear probability models. Although the overall response was thoughtful about thinking more carefully about causality, it still gave the canned 'thou shalt not', theoretically correct, fundamentalist response. My prompt could have been more sophisticated, but I tried to prompt from a user's perspective - someone who may not be as familiar with applied statistics work, or who may have even read my blog and wanted to question something about the use of LPMs, and who may not be thinking about the tradeoffs or may be unfamiliar with the social norms and practices related to their use. As has been noted before on this blog, in applied work there is no consensus among practitioners that nonlinear models (like logistic regression) are 'better' than LPMs when estimating treatment effects. If anything this illustrates that, at best, a response from an LLM about applied econometric analysis could be just as good as having another expert in the room - but an experienced practitioner understands that experts often disagree, and that disagreement comes with a lot of nuance and is often as much the result of social norms and practices as theory. Perhaps someone could take the fundamentalist response from this prompt, do their analysis, solve their problem, and there is no harm at the end of the day. But there is danger in fundamentalism if it leads them to ignore great work and potential learnings derived from LPMs, or prevents them from getting more actionable and interpretable results vs. stumbling through the mental gymnastics sometimes required to interpret things like odds ratios vs. marginal effects! 

Note: As I refined my prompt I got slightly better answers, but not one that really captures the nuance I would like to have. Again....it's like arguing with an expert who knows the theory but may not be as familiar with all the nuances of different disciplines and the practical implications of the tradeoffs involved. I have to say, giving a response close to what I would expect from another practitioner, albeit one perhaps outside the field of economics, is impressive! 

While it did a great job articulating the pros and cons of LPMs vs. logistic regression or other models, I think these responses would leave the casual reader with the impression that they should be wary of relying on LPMs to estimate treatment effects in most situations. And as noted above, that could be an issue depending on the situation. Just because I disagree with the tone of the response doesn't mean ChatGPT is a bad tool; there are great textbooks and econometricians that I think would certainly endorse these responses! It's the missing nuance around these differences in opinion that concerns me, and while it's not super critical in this situation, I'd be concerned about others. 

And I have to disagree that some of the 'cons' listed about LPMs are really as big of an issue as a reader might think if they take this response very seriously. Most of the listed 'cons' can often be addressed easily, and in my experience, despite all the alleged 'cons', LPMs generally lead to the same decision you would make when properly interpreting and understanding results from a logistic regression, while providing all the very practical benefits you get from an LPM. The nuance is that 'cons' in theory don't always have implications that matter in any practical sense. As Angrist and Pischke state: "there is a lot to be said for sticking to a linear regression function as compared to a fairly arbitrary choice of a non-linear one!"
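
To make that nuance concrete, here's a hedged sketch (simulated data; the variable names d, x, and y are made up) comparing an LPM estimate of a treatment effect with average marginal effects from a logit. In settings like this the two typically land close to each other, which is the practical point the canned list of 'cons' tends to obscure.

```python
# Minimal sketch (simulated data; variable names are hypothetical) comparing an
# LPM estimate of a treatment effect with average marginal effects from a logit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({"d": rng.integers(0, 2, n), "x": rng.normal(size=n)})
# True model: treatment shifts the probability of the binary outcome
p = 1 / (1 + np.exp(-(-0.5 + 0.6 * df["d"] + 0.8 * df["x"])))
df["y"] = rng.binomial(1, p)

# Linear probability model with heteroskedasticity-robust standard errors
lpm = smf.ols("y ~ d + x", data=df).fit(cov_type="HC1")
print("LPM estimate of the treatment effect:", round(lpm.params["d"], 3))

# Logit, reported as average marginal effects so it is on the same scale
logit = smf.logit("y ~ d + x", data=df).fit(disp=0)
print(logit.get_margeff(at="overall").summary())
# In many applied settings the two land close enough that the practical choice
# hinges on interpretability and workflow, not on the textbook 'cons'.
```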

Prompt: Is it OK to use a linear probability model to measure causal effects?

Answer:

Using a linear probability model to measure causal effects can be problematic and is generally not recommended. A linear probability model (LPM) is a linear regression model where the dependent variable is a binary outcome, typically coded as 0 or 1. In this model, the predicted values can be outside the [0, 1] range, which can lead to biased estimates and incorrect interpretations.

There are several issues with using a linear probability model for causal effects:

1. Heteroscedasticity: The variance of the error term in the linear probability model is not constant across all values of the independent variables, violating one of the assumptions of linear regression. This can lead to inefficient estimates and incorrect standard errors.

2. Predicted probabilities outside [0, 1]: Since linear regression does not constrain the predicted values to be between 0 and 1, it can produce probabilities that fall outside this range, which is not meaningful for binary outcomes.

3. Non-linearity: The relationship between the independent variables and the probability of the binary outcome is often non-linear. Using a linear model may not capture this relationship accurately.

To address these issues, researchers typically use alternative methods for estimating causal effects with binary outcomes. The most common approaches are logistic regression and probit regression. These models are specifically designed for binary outcomes and provide estimates that are valid probabilities between 0 and 1.

Additionally, for estimating causal effects, researchers often use methods such as randomized controlled trials (RCTs) or quasi-experimental designs (e.g., difference-in-differences, matching methods, instrumental variables) to establish causality more convincingly.

In summary, while linear probability models might be used for exploratory analysis, they are generally not suitable for measuring causal effects with binary outcomes. Logistic regression or probit regression should be preferred when dealing with binary dependent variables, and careful consideration of study design and potential confounders is essential for establishing causality.

Additional Reading:

Love It or Logit, or: Man, People *Really* Care About Binary Dependent Variables. Marc Bellemare. http://marcfbellemare.com/wordpress/9024

Whether to probit or to probe it: in defense of the Linear Probability Model. World Bank Blogs. Jed Friedman. https://blogs.worldbank.org/impactevaluations/whether-to-probit-or-to-probe-it-in-defense-of-the-linear-probability-model 

Thursday, March 2, 2023

Are Matching Estimators and the Conditional Independence Assumption Inconsistent with Rational Decision Making?

 Scott Cunningham brings up some interesting points about matching and utility maximization in this substack post: https://causalinf.substack.com/p/why-do-economists-so-dislike-conditional 

"Because most of the time, when you are fully committed to the notion that people are rational, or at least intentionally pursuing goals and living in the reality of scarcity itself, you actually think they are paying attention to those potential outcomes. Why? Because those potential outcomes represent the gains from the choice you’re making....if you think people make choices because they hope the choice will improve their life, then you believe their choices are directly dependent on Y0 and Y1. This is called “selection on treatment gains”, and it’s a tragic problem that if true almost certainly means covariate adjustment won’t work....Put differently, conditional independence essentially says that for a group of people with the same covariate values, their decision making had become erratic and random. In other words, the covariates contained the rationality and you had found the covariates that sucked that rationality out of their minds."

This makes me want to ask - is there a way I can specify utility functions or think about utility maximization that is consistent with the CIA in a matching scenario? This gets me into very dangerous territory because my background is applied economics, not theory. I think most of the time when matching is being used in observational settings, people aren't thinking about utility functions and consumer preferences and how they relate to potential outcomes. Especially non-economists. 

Thinking About Random Utility Models

The discussion above for some reason motivated me to think about random utility models (RUMs). Not being a theory person and having hardly worked with RUMs at all, I'm being even more dangerous here, but hear me out - this is just a thought experiment. 

I first heard of RUMs years ago when working in market research and building models focused on student enrollment decisions. From what I understand they are an important workhorse in discrete choice modeling applications. Food economist Jayson Lusk has even looked at RUMs and their predictive validity via functional magnetic resonance imaging (see Neural Antecedents of a Random Utility Model).

The equation below represents the basic components of a random utility model:

U = V + e

where 'V' represents systematic utility and 'e' represents random utility. 

Consumers choose the option that provides the greatest utility. The systematic component 'V' captures attributes describing the alternative choices, perceptions about the choices, and characteristics of the decision maker. In the cases where matching methods are used in observational settings, the relevant choice is often whether or not to participate in a program or take a treatment.

This seems to speak to one of the challenges raised in Scott's post (keep in mind Scott never mentions RUMs; all this about RUMs is my meandering, so if the discussion of RUMs is nonsensical it's on me, not him): 

"The known part requires a model, be it formal or informal in nature, and the quantified means it’s measured and in your dataset. So if you have the known and quantified confounder, then a whole host of solutions avail themselves to you like regression, matching, propensity scores, etc....There’s a group of economists who object to this statement, and usually it’s that “known” part."

What seems appealing to me is that RUMs appear to allow us to make use of what we think we can know about utility via 'V' and still admit that there is a lot we don't know, captured by 'e' in a random utility model. In this formulation 'e' still represents rationality; it's just unobservable heterogeneity in rational preferences. This is assumed to be random. Many economists working in discrete choice modeling contexts are apparently comfortable with the 'known' part of a RUM, at least as I understand it.

A Thought Experiment: A Random Utility Model for Treatment Participation

Again - proceeding cautiously here, suppose that in an observational setting the decision to engage in a program or treatment designed to improve outcome Y is driven by systematic and random components in a RUM:

U = V(x) + e

and the decision to participate is based, as Scott describes, on the potential outcomes Y1 and Y0, which represent the gains from choosing:

delta = (Y1 - Y0) where you get Y1 for choosing D=1 and Y0 for D=0

In the RUM you choose D = 1 if U(D = 1) > U(D = 0) 

D = f(delta) = f(Y1, Y0) = f(x)

and we specify the RUM as U(D) = V(x) + e

where x represents all the observable things that might contribute to an individual's utility (perceptions about the choices, and characteristics of the decision maker) in relation to making this decision. 

So the way I want to think about this is that when we are matching, the factors we match/control for would be the observable variables 'x' that contribute to systematic utility V(x), while the unobservable aspects reflect heterogeneous preferences across individuals that we can't measure. These would contribute to the random component of the RUM. 

So in essence YES, if we think about this in the context of a RUM, the covariates contain all of the rationality (at least the observable parts) and what is unobserved can be modeled as random. We've harmonized utility maximization, matching and the CIA! 

Meeting the Assumptions of Random Utility and the CIA

But wait...not so fast. In the observational studies where matching is deployed, I am not sure we can assume the unobserved heterogeneous preferences represented by 'e' will be random across the groups we are comparing. Those who choose D = 1 will have obvious differences in preferences from those who choose D = 0. There will be important differences between treatment and control groups' preferences not accounted for by covariates in the systematic component V(x), and those unobserved preferences in 'e' will be dependent on the potential outcomes Y0 and Y1, just as Scott was saying. I don't think we can assume, in an observational setting with treatment selection, that the random component of the RUM is really random with regard to the choice of taking treatment if the choice is driven by expected potential outcomes. 
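
As a rough check on this intuition, here's a small simulation sketch (all parameter values are invented) of the RUM choice rule D = 1 if V(x) + e > 0, with regression adjustment on x standing in for matching on x. In the first world 'e' is pure noise, independent of the potential outcomes; in the second, 'e' loads on the gains and on the untreated outcome (selection on Y0 and Y1), and the adjusted estimate drifts away from both the ATE and the ATT.

```python
# Simulation sketch (hypothetical parameter values) of treatment choice from a
# random utility model: D = 1 if V(x) + e > 0. Regression adjustment on x is
# used here as a simple stand-in for matching on x.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 50_000
x = rng.normal(size=n)
u = rng.normal(size=n)                 # unobserved level shock in the untreated outcome
y0 = x + u
gain = 1 + 0.5 * rng.normal(size=n)    # heterogeneous treatment gains (Y1 - Y0)
y1 = y0 + gain

def adjusted_estimate(e):
    d = (0.8 * x + e > 0).astype(int)  # RUM choice rule: D = 1 if V(x) + e > 0
    y = np.where(d == 1, y1, y0)
    df = pd.DataFrame({"y": y, "d": d, "x": x})
    adj = smf.ols("y ~ d + x", data=df).fit().params["d"]
    return {"ATE": round(gain.mean(), 2),
            "ATT": round(gain[d == 1].mean(), 2),
            "adjusted estimate": round(adj, 2)}

# World 1: 'e' is pure noise, independent of (Y0, Y1) -> CIA given x is plausible
print(adjusted_estimate(rng.normal(size=n)))

# World 2: 'e' loads on the gains and the untreated outcome (selection on Y0, Y1)
print(adjusted_estimate(1.5 * (gain - gain.mean()) - 1.0 * u + rng.normal(size=n)))
```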

Some Final Questions

If 'x' captures everything relevant to an individual's assessment of their potential outcomes Y1 and Y0 (and we have all the data for 'x', which itself is a questionable assumption), then could we claim that everything else captured by the term 'e' is due to random noise - maybe pattern noise or occasion noise?

In an observational setting where we are modeling treatment choice D, can we break 'e' down further into components like below?

e = e1 + e2

where e1 is unobservable heterogeneity in rational preferences driven by potential outcomes Y1 & Y0, making it non-random, and e2 represents noise that is more random, like pattern or occasion noise, and likely to be independent of Y1 & Y0. 

IF the answer to the questions above is YES - if we can decompose the random component of RUMs this way and e2 makes up the largest component of e (i.e., e1 is small, non-existent, or insignificant) - then maybe a RUM is a valid way to think about modeling the decision to choose treatment D, and we can match on the attributes of systematic utility 'x' and appeal to the CIA (if my understanding is correct).

But the less we actually know about x and what is driving the decision as it relates to potential outcomes Y0 and Y1, the larger e1 becomes and then the random component of a RUM may no longer be random. 

If my understanding above is correct, then the things we likely would have to assume for a RUM to be valid turn out to be similar to if not exactly the things we need for the CIA to hold. 

The possibility of meeting the assumptions of a RUM or the CIA would seem unlikely in observational settings if (1) we don't know a lot about systematic utility and 'x' and (2) the random component 'e' turns out not to be random. 

Conclusion

So much for an applied guy trying to do theory to support the possibility of the CIA holding in matched analysis. I should say I am not an evangelist for matching, but am trying to be more of a realist about its uses and validity. Scott's post introduces a very interesting way to think about matching and the CIA and the challenges we might have meeting the conditions for it. 


Thursday, January 26, 2023

What is new and different about difference-in-differences?

Back in 2012 I wrote about the basic 2x2 difference-in-differences analysis (two groups, two time periods). Columbia Public Health probably has a better introduction. 

The most famous example of an analysis that motivates a 2 x 2 DID analysis is John Snow's 1855 analysis of the cholera epidemic in London:




[Image: John Snow's cholera analysis (image source linked in the original post)]

I have since written about some of the challenges of estimating DID with GLM models (see here, here, and here), as well as combining DID with matching and problems to watch out for when combining methods. But a lot of what we know about difference-in-differences has changed in the last decade. I'll try to give a brief summary based on my understanding and point toward some references that do a better job presenting the current state.
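
For reference, here's a minimal sketch (Python, simulated data with made-up effect sizes) of the basic 2x2 calculation, done both from the four cell means and from the equivalent interaction regression.

```python
# Minimal sketch (simulated data; numbers are made up) of the basic 2x2 DID:
# two groups, two periods, computed from cell means and from a regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2_000
df = pd.DataFrame({"treat": rng.integers(0, 2, n), "post": rng.integers(0, 2, n)})
# made-up DGP: group difference 1.0, time trend 2.0, treatment effect 1.5
df["y"] = (5 + 1.0 * df["treat"] + 2.0 * df["post"]
           + 1.5 * df["treat"] * df["post"] + rng.normal(size=n))

cell = df.groupby(["treat", "post"])["y"].mean()
did = (cell.loc[(1, 1)] - cell.loc[(1, 0)]) - (cell.loc[(0, 1)] - cell.loc[(0, 0)])
print("DID from the four cell means:", round(did, 2))

reg = smf.ols("y ~ treat * post", data=df).fit()
print("coefficient on treat:post:", round(reg.params["treat:post"], 2))
```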

The Two-Way Fixed Effects model (TWFE)

The first thing I should discuss is extending the 2x2 model to include multiple treated groups and/or multiple time periods. The generalized model for DID, also referred to as the two-way fixed effects (TWFE) model, is the best way to represent those kinds of scenarios:

Y_gt = a_g + b_t + δD_gt + ε_gt

a_g = group fixed effects

b_t = time fixed effects

D_gt = treatment group × post period indicator (interaction term)

δ = ATT or DID estimate

Getting the correct standard errors for DID models that involve many repeated measures over time and/or where treatment and control groups are defined by multiple geographies presents two challenges compared to the basic 2x2 model: serial correlation and correlation within groups. There are several approaches that can be considered depending on your situation.

1 - Block bootstrapping

2 - Aggregating data into single pre and post periods

3 - Clustering standard errors at the group level

Clustering at the group level should provide the appropriate standard errors in these situations when the number of clusters is large.
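
Here's a minimal sketch (simulated long-format panel; the names panel, y, d, group, and t are all made up) of option 3: estimating the TWFE model with group and time dummies and clustering standard errors at the group level, compared with the default standard error.

```python
# Sketch of TWFE with standard errors clustered at the group level
# (simulated panel with serially correlated errors within each group).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for g in range(30):                                  # 30 groups, half treated from t = 6
    alpha_g = rng.normal()
    eps = 0.0
    for t in range(12):
        eps = 0.8 * eps + rng.normal(scale=0.3)      # serial correlation within group
        d = int(g < 15 and t >= 6)
        rows.append({"y": alpha_g + 0.1 * t + 1.0 * d + eps,
                     "d": d, "group": g, "t": t})
panel = pd.DataFrame(rows)

formula = "y ~ d + C(group) + C(t)"
default = smf.ols(formula, data=panel).fit()
clustered = smf.ols(formula, data=panel).fit(cov_type="cluster",
                                             cov_kwds={"groups": panel["group"]})
print("DID estimate (delta):", round(clustered.params["d"], 2))
print("default SE:", round(default.bse["d"], 3),
      "| clustered SE:", round(clustered.bse["d"], 3))
```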

For more details on TWFE models, both Scott Cunningham and Nick Huntington-Klein have great econometrics textbooks with chapters devoted to these topics. See the references below for more info.

Differential Timing and Staggered Rollouts

But things can get even more complicated with DID designs. Think about situations where different groups get treated at different times over a number of time periods. This is not just a thought experiment trying to imagine the most difficult study design and pondering for the sake of pondering - these kinds of staggered rollouts are very common in business and policy settings. Imagine policy rules adopted by different states over time (like changes in minimum wages), or imagine testing a new product or service by rolling it out to different markets over time. Understanding how to evaluate their impact is important. For a while it seemed economists may have been a little guilty of handwaving with the TWFE model, assuming the estimated treatment coefficient was giving them the effect they wanted. 

But Andrew Goodman-Bacon refused to take this interpretation at face value and broke it down for us, showing that the TWFE estimator gives us a weighted average of all the potential 2x2 DID estimates you could make with the data. That actually sounds intuitive and helpful. But what he discovered that is not so intuitive is that some of those 2x2 comparisons can compare newly treated groups with previously treated groups. That's not a comparison we generally are interested in making, but it gets averaged in with the others and can drastically bias the results, particularly when there is treatment effect heterogeneity (the treatment effect differs across groups and trends over time). 
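
To see the problem concretely, here's a small simulation sketch (cohorts, dates, and effect sizes are all invented): two cohorts adopt treatment at different times, the effect grows with time since treatment, and the static TWFE coefficient can be compared with the true average effect among treated observations. Any gap comes from 2x2 comparisons that use already-treated units as controls.

```python
# Simulation sketch (hypothetical cohorts and effect sizes): staggered adoption
# with dynamic treatment effects. Compare the static TWFE coefficient with the
# true average effect among treated observations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for unit in range(40):
    adopt = 3 if unit < 20 else 7          # early cohort treats at t=3, late cohort at t=7
    alpha = rng.normal()                   # unit fixed effect
    for t in range(10):
        treated = int(t >= adopt)
        effect = 1.0 * (t - adopt + 1) if treated else 0.0   # grows with time since treatment
        rows.append({"unit": unit, "t": t, "d": treated, "true_effect": effect,
                     "y": alpha + 0.2 * t + effect + rng.normal(scale=0.1)})
df = pd.DataFrame(rows)

twfe = smf.ols("y ~ d + C(unit) + C(t)", data=df).fit()
print("static TWFE estimate of delta:", round(twfe.params["d"], 2))
print("true average effect among treated observations:",
      round(df.loc[df["d"] == 1, "true_effect"].mean(), 2))
```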

So how do you get a better DID estimate in this situation? I'll spare you the details (because I'm still wrestling with them), but the answer seems to be the estimation strategy developed by Callaway and Sant'Anna. The documentation for their R package walks through a lot of the details and challenges with TWFE models with differential timing. 

Additionally this video of Andrew Goodman-Bacon was really helpful for understanding the 'Bacon' decomposition of TWFE models and the problems above.


After watching Goodman-Bacon, I recommend this talk from Sant'Anna discussing their estimator. 

Below, Nick Huntington-Klein provides a great summary of the issues made apparent by the Bacon decomposition discussed above and of the Callaway and Sant'Anna method for staggered/rollout DID designs. He also gets into the Wooldridge Mundlak approach:

A Note About Event Studies

In a number of references I have tried to read to understand this issue, the term 'event study' is thrown around, and it seems like every time it is used it is used differently, yet the author/speaker assumes we are all talking about the same thing. In this video Nick Huntington-Klein introduces event studies in a way that is the most clear and consistent. Watching this video might help.

References: 

Causal Inference: The Mixtape. Scott Cunningham. https://mixtape.scunning.com/ 

The Effect: Nick Huntington-Klein. https://theeffectbook.net/

Andrew Goodman-Bacon. Difference-in-differences with variation in treatment timing. Journal of Econometrics, Volume 225, Issue 2, 2021.

Brantly Callaway, Pedro H.C. Sant’Anna. Difference-in-Differences with multiple time periods. Journal of Econometrics, Volume 225, Issue 2, 2021.

Related Posts:

Modeling Claims Costs with Difference in Differences. https://econometricsense.blogspot.com/2019/01/modeling-claims-with-linear-vs-non.html 

Was It Meant to Be? OR Sometimes Playing Match Maker Can Be a Bad Idea: Matching with Difference-in-Differences. https://econometricsense.blogspot.com/2019/02/was-it-meant-to-be-or-sometimes-playing.html 


Saturday, October 29, 2022

The Value of Experimentation and Causal Inference in Complex Business Environments

Introduction




Summary: Causality in business means understanding how to connect the things we do with the value we create. A cause is something that makes a difference (David Lewis, Journal of Philosophy, 1973). If we are interested in what makes a difference in creating business value (what makes a difference in moving the truck above), we care about causality. Causal inference in business helps us create value by providing knowledge about what makes a difference, so we can move resources from a lower-valued use (having folks on the back of the truck) to a higher-valued use (putting folks behind the truck). 


We might hear the phrase correlation is not causation so often that it could easily be dismissed as a cliche, as opposed to a powerful mantra for improving knowledge and decision making. These distinctions have an important meaning in business and applied settings. We could think of businesses as collections of decisions and processes that move and transform resources. Business value is created by moving resources from lower to higher valued uses.  Knowledge is the most important resource in a firm and the essence of organizational capability, innovation, value creation, and competitive advantage. Causal knowledge is no exception. Part 1 of this series discusses the knowledge problem and decisions.

In business, talk can be cheap. With lots of data, anyone can tell a story to support any decision they want to make. But good decision science requires more than just having data and a good story; it's about having evidence to support decisions so we can learn faster and fail smarter. In the diagram above this means being able to identify a resource allocation that helps us push the truck forward (getting people behind the truck). Confusing correlation with causation might lead us to believe value is a matter of changing shirt colors vs. moving people. We don't want to be weeks, months, or years down the road only to realize that other things are driving outcomes, not the thing we've been investing in. By that time, our competition is too far ahead for us to ever catch up, and it may be too late for us to make up for the losses of misspent resources. This is why in business we want to invest in causes, not correlations. We are ultimately going to learn either way; the question is whether we'd rather do it faster and methodically, or slower and precariously. 

How does this work? You might look at the diagram above and tell yourself: it's common sense where you need to stand to push the truck to move it forward - I don't need any complicated analysis or complex theories to tell me that. That's true for a simple scenario like that, and likely so for many day-to-day operational decisions. Sometimes common sense or subject matter expertise can provide us with sufficient causal knowledge to know what actions to take. But when it comes to informing the tactical implementation of strategy (discussed in part 3 of this series), we can't always make that assumption. In complex business environments with high causal density (where the number of things influencing outcomes is large), we usually don't know enough about the nature and causes of human behavior, decisions, and the causal paths from actions to outcomes to account for them well enough to know: what should I do? What creates value? In complicated business environments intuition alone may not be enough - as I discuss in part 2 of this series, we can be easily fooled by our own biases and by biases in the data and the many stories it could tell. 

From his experience at Microsoft, Ron Kohavi shares that up to 2/3 of the ideas we might test in a business environment turn out to either have flat results or harm the metric we are trying to improve. In Noise: A Flaw in Human Judgment, the authors share how often experts disagree with each other, and even with themselves at different times, because of biases in judgment and decision making. As Stephen Wendel says, you can't just wing it with bar charts and graphs when you need to know what makes a difference. 

In application, experimentation and causal inference represent a way of thinking that requires careful consideration of the business problem and all the ways that our data can fool us: separating signal from noise (statistical inference) and making the connection between actions and outcomes (causal inference). Experimentation and causal inference leverage good decision science that brings together theory and subject matter expertise with data so we can make better-informed business decisions in the face of our own biases and the biases in data. In the series of posts that follow, I overview in more detail the ways that experimentation and causal inference help us do these things in complex business environments. 

The Value of Experimentation and Causal Inference in Complex Business Environments:

Part 1: The Knowledge Problem

Part 2: Behavioral Biases

Part 3: Strategy and Tactics