Tuesday, April 21, 2020

The Value of Business Experiments Part 2: A Behavioral Economic Perspective

In my previous post I discussed the value proposition of business experiments from a classical economic perspective. In this post I want to view it from a behavioral economic perspective. Viewed this way, business experiments can prove invaluable for addressing challenges related to overconfidence and decision making under uncertainty.

Heuristic Data Driven Decision Making and Data Story Telling

In a fast paced environment, decisions are often made quickly and based on gut instinct. Progressive companies have tried as much as possible to leverage big data and analytics to become data driven organizations. Ideally, leveraging data would help override the biases, gut instincts, and ulterior motives that may stand behind a scientific hypothesis or business question. One of the many things we have learned from behavioral economics is that humans tend to over-interpret data, reading unreliable patterns into it that lead to incorrect conclusions. Francis Bacon recognized this over 400 years ago:

"the human understanding is of its own nature prone to suppose the existence of more order and regularity in the world than it finds" 

Decision makers can be easily duped by big data, ML, AI, and various BI tools into thinking that their data is speaking to them. As Jim Manzi and Stefan Thomke state in Harvard Business Review, in the absence of formal randomized testing:

"executives end up misinterpreting statistical noise as causation—and making bad decisions"

Data seldom speaks, and when it does it is often lying. This is the impetus behind the introduction of what became the scientific method. The true art and science of data science is teasing out the truth, or what version of truth can be found in the story being told. I think this is where field experiments are most powerful and create the greatest value in the data science space. 

Decision Making Under Uncertainty, Risk Aversion, and The Dunning-Kruger Effect

Kahneman (in Thinking, Fast and Slow) makes an interesting observation in relation to managerial decision making. Very often managers reward peddlers of even dangerously misleading information while disregarding or even punishing merchants of truth. Confidence in a decision is often based more on the coherence of a story than on the quality of the information that supports it. Those who take risks based on bad information are often rewarded when things happen to work out. To quote Kahneman:

"a few lucky gambles can crown a reckless leader with a halo of prescience and boldness"

As Kahneman discusses in Thinking, Fast and Slow, those who take the biggest risks are not necessarily less risk averse; they are often simply less aware of the risks they are actually taking. This leads to overconfidence, a lack of appreciation for uncertainty, and a culture where a solution based on pretended knowledge is often preferred and even rewarded. It's easy to see how the Dunning-Kruger effect would dominate. This feeds a vicious cycle that leads to collective blindness toward risk and uncertainty. It encourages taking risks that in many cases should be avoided, and prevents others from considering better but perhaps less audacious bets. Field experiments can help facilitate taking more educated gambles. Thinking through an experimental design (engaging Kahneman's system 2) provides a structured way of thinking about business problems and how to truly leverage data to solve them. And the data we get from experimental results can be interpreted causally. Identifying causal effects from an experiment helps us determine whether outcomes are likely due to a business decision, as opposed to blindly trusting gut instincts, luck, or the noisy patterns we might find in the data.
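To make this concrete, below is a minimal sketch of one piece of that structured, up-front thinking: a sample size calculation for a simple two-group experiment. The inputs (the smallest effect worth acting on, the error tolerances) are purely hypothetical and exist only to illustrate the kinds of questions a design forces us to answer before any data are collected.

```python
# A sketch of up-front experimental design thinking: how much data would we need
# to reliably detect an effect we actually care about? All inputs are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest standardized effect worth acting on, tolerated false positive rate,
# and desired probability of detecting a real effect of that size
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8,
                                    alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.0f}")
```

Having to state an effect size and error tolerances in advance is itself a check on the overconfidence described above.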

Just as rapid cycles of experiments in a business setting can aid in the struggle with the knowledge problem, they also provide an objective and structured way of thinking about our data and the conclusions we can reach from it, while avoiding as much as possible some of these behavioral pitfalls. A business culture that supports risk taking coupled with experimentation will come to value tested solutions over pretended knowledge. That's valuable.


Monday, April 20, 2020

The Value of Business Experiments and the Knowledge Problem

Why should firms leverage randomized business experiments? With recent advancements in computing power and machine learning, why can't they simply base all of their decisions on historical observational data? Perhaps statisticians, econometricians, and others have a simple answer: experiments may be the best (often the gold standard) way of answering causal questions. I certainly can't argue against answering causal questions (just read this blog). However, here I want to focus on a number of more fundamental reasons that experiments are necessary in business settings, from the perspective of both classical and behavioral economics:

1) The Knowledge Problem
2) Behavioral Biases
3) Strategy and Tactics

In this post I want to discuss the value of business experiments from more of a neoclassical economic perspective. The fundamental problem of economics, society, and business is the knowledge problem. In his famous 1945 American Economic Review article The Use of Knowledge in Society, Hayek argues:

"the economic problem of society is not merely a problem of how to allocate 'given resources'....it is a problem of the utilization of knowledge which is not given to anyone in its totality."

A really good parable explaining the knowledge problem is the essay I, Pencil by Leonard E. Read. The fact that no one person possesses the information necessary to make something as seemingly simple as a basic number 2 pencil captures the essence of the knowledge problem.

If you remember your principles of economics, you know that the knowledge problem is solved by a spontaneous order guided by prices, which reflect tradeoffs based on the disaggregated, incomplete, and imperfect knowledge and preferences of millions (billions) of individuals. Prices serve the dual function of providing information and the incentives to act on that information. It is through this information creating and coordinating process that prices help solve the knowledge problem.

Prices solve the problem of calculation that Hayek alluded to in his essay, and they are what coordinate all of the activities discussed in I, Pencil. The knowledge problem explains how market economies work, and why socially planned economies have historically failed to allocate resources without producing shortages, surpluses, and collapse.

In Living Economics: Yesterday, Today, and Tomorrow, Peter J. Boettke discusses the knowledge problem in the context of firms and the work of economist Murray Rothbard:

"firms cannot vertically integrate without facing a calculation problem....vertical integration eliminates the external market for producer goods."

In essence, and this seems consistent with Coase, as firms integrate to eliminate transaction costs they also eliminate the markets which generate the prices that solve the knowledge problem! In a way, firms could be viewed as little islands with socially planned economies in a sea of market competition. As Luke Froeb masterfully illustrates in his text Managerial Economics: A Problem Solving Approach (3rd Ed), decisions within firms in effect create regulations, taxes, and subsidies that destroy wealth-creating transactions. Managers should make decisions that consummate the most wealth-creating transactions (or at least do their best not to destroy, discourage, or prohibit them).

So how do we solve the knowledge problem within firms without the information creating and coordinating role of prices? Whenever mistakes are made, Luke Froeb suggests working through this problem-solving algorithm:

1) Who is making the bad decision?
2) Do they have enough information to make a good decision?
3) Do they have the incentive to make a good decision?

In essence, in the absence of prices, we must try to answer the same questions that prices often resolve. And we could leverage business experiments to address the second question above. Experiments can provide important causal decision making information. While I would never argue that data science, advanced analytics, artificial intelligence, or any field experiment could ever solve the knowledge problem, I will argue that business experiments become extremely valuable because of the knowledge problem within firms.
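As a sketch of what that causal decision making information can look like, consider a simple randomized test analyzed on simulated, entirely hypothetical data. Because assignment is random, the difference in average outcomes between groups, along with its uncertainty, can be read as the causal effect of the change being tested.

```python
# A sketch of analyzing a simple randomized business experiment. The data are
# simulated and hypothetical; with random assignment, the difference in means
# estimates the causal effect of the proposed change.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome (e.g., revenue per customer) under random assignment
control = rng.normal(loc=100, scale=15, size=500)    # business as usual
treatment = rng.normal(loc=104, scale=15, size=500)  # proposed change

effect = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment) + control.var(ddof=1) / len(control))
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Estimated effect: {effect:.2f}")
print(f"95% CI: ({effect - 1.96 * se:.2f}, {effect + 1.96 * se:.2f}), p-value: {p_value:.3f}")
```

The same arithmetic applied to observational data would not carry the same causal interpretation without much stronger assumptions.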

Going back to I, Pencil and Hayek's essay, the knowledge problem is solved through the spontaneous coordination of multitudes of individual plans via markets. Through a trial and error process where feedback is given through prices, the plans that do the best job coordinating people's choices are adopted. Within firms, by contrast, there are only a few plans, expressed through various strategies and tactics, compared to the multitudes competing in the market. But as discussed in Jim Manzi's book Uncontrolled, firms can mimic this trial and error process through iterative experimentation interspersed with theory and subject matter expertise. Experiments help establish causal facts, but it takes theory and subject matter expertise to understand which facts are relevant.

In essence, while experiments don't perfectly emulate the same kind of evolutionary feedback mechanisms prices deliver in market competition, an iterative test and learn culture within a business may provide the best strategy for dealing with the knowledge problem. And that is one of many ways that business experiments are able to contribute value.

See also:

Statistics is a Way of Thinking, Not Just a Box of Tools

Monday, April 6, 2020

Statistics is a Way of Thinking, Not Just a Box of Tools

If you have taken many statistics courses, you may have gotten the impression that statistics is mostly a mixed bag of computations and rules for conducting hypothesis tests, making predictions, or creating forecasts. While this isn't necessarily wrong, it could leave you with the opinion that statistics is mostly just a box of tools for solving problems. Statistics absolutely provides us with important tools for understanding the world, but to think of statistics as 'just tools' has some pitfalls (besides the most common pitfall of having a hammer and viewing every problem as a nail).

For one, there is a huge gap between the theoretical 'tools' and real world application. This gap is filled with critical thinking, judgment calls, and various social norms, practices, and expectations that differ from field to field, business to business, and stakeholder to stakeholder. The art and science of statistics is often about filling this gap. That's quite a stretch beyond 'just tools.'

The proliferation of open source programming languages (like R and Python) and point-and-click automated machine learning solutions (like DataRobot and H2O.ai) might give the impression that after you have done your homework framing the business problem and working through data and feature engineering, all that is left is hyper-parameter tuning and plugging and playing with a number of algorithms until the 'best' one is found. It can seem to reduce to a mechanical (and, without automated tools, sometimes time consuming) exercise. The fact that a lot of this work can in fact be automated probably contributes to the 'toolbox' mentality when thinking about the much broader field of statistics as a whole. In The Book of Why, Judea Pearl provides an example explaining why statistical inference (particularly causal inference) problems can't be reduced to easily automated mechanical exercises:

"path analysis doesn't lend itself to canned programs......path analysis requires scientific thinking as does every exercise in causal inference. Statistics, as frequently practiced, discourages it and encourages "canned" procedures instead. Scientists will always prefer routine calculations on data to methods that challenge their scientific knowledge."

Indeed, a routine practice that takes a plug-and-play approach with 'tools' can be problematic in many cases of statistical inference. A good example is simply plugging GLM models into a difference-in-differences context, or combining matching with difference-in-differences. While we can get these approaches to 'play well together' under the correct circumstances, it's not as simple as calling the packages and running the code. Viewing methods of statistical inference and experimental design as just a box of tools to be applied to data could leave one open to the plug-and-play fallacy. There are times you might get by with using a flathead screwdriver to tighten a Phillips-head screw, but inferential methods are not so easily substituted, even if the fit looks snug enough on the surface.
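To make the point concrete, here is a minimal sketch of a two-period difference-in-differences regression on simulated, entirely hypothetical data. Calling the package is the easy, mechanical part; the hard part, which no function call automates, is defending the identifying assumptions (such as parallel trends) that make the interaction coefficient interpretable as a causal effect.

```python
# A sketch of a basic two-period difference-in-differences estimate on simulated
# data. Variable names and parameter values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # group exposed to the intervention
    "post": rng.integers(0, 2, n),     # period after the intervention
})
true_effect = 3.0
df["y"] = (50
           + 5 * df["treated"]                      # pre-existing group difference
           + 2 * df["post"]                         # common time trend
           + true_effect * df["treated"] * df["post"]
           + rng.normal(0, 5, n))

# The DiD estimate is the coefficient on the interaction term
fit = smf.ols("y ~ treated * post", data=df).fit()
print(fit.params["treated:post"])
```

Whether that coefficient means anything depends on the design and the data generating process, not on the code; swapping in a GLM or adding matching changes the assumptions being made, which is exactly the kind of reasoning that can't be delegated to a package.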

Understanding the business problem and data story telling are in fact two other areas of data science that would be difficult to automate. But don't let that fool you into thinking that the remainder of data science, including statistical inference, is simply a mechanical exercise of applying the 'best' algorithm to 'big data'. You might get by with that for the minority of use cases that require a purely predictive or pattern-finding solution, but the remainder of the world's problems are not so tractable. Statistics is about more than data or the patterns we find in it. It's a way of thinking about the data.

"Causal Analysis is emphatically not just about data; in causal analysis we must incorporate some understanding of the process that produces the data and then we get something that was not in the data to begin with." - Judea Pearl, The Book of Why

Statistics is a Way of Thinking

In their well-known advanced textbook "Principles and Procedures of Statistics: A Biometrical Approach", Steel and Torrie push back on the attitude that statistics is just about computational tools:

"computations are required in statistics, but that is arithmetic, not mathematics nor statistics...statistics implies for many students a new way of thinking; thinking in terms of uncertainties of probabilities.....this fact is sometimes overlooked and users are tempted to forget that they have to think, that statistics cannot think for them. Statistics can however help research workers design experiments and objectively evaluate the resulting numerical data."

At the end of the day we are talking about leveraging data driven decision making to override the biases, gut instincts, and ulterior motives that may stand behind a scientific hypothesis or business question. Objectively evaluating numerical data, as Steel and Torrie put it above. But what do we actually mean by data driven decision making? Mastering (if possible) statistics, inference, and experimental design is part of a lifelong process of understanding and interpreting data to solve applied problems in business and the sciences. It's not just about conducting your own analysis and being your own worst critic, but also about interpreting, criticizing, translating, and applying the work of others. Biologist and geneticist Kevin Folta put this well once on a Talking Biotech podcast:

"I've trained for 30 years to be able to understand statistics and experimental design and interpretation...I'll decide based on the quality of the data and the experimental design....that's what we do."

In 'Uncontrolled' Jim Manzi states:

"observing a naturally occurring event always leaves open the possibility of confounded causes...though in reality no experimenter can be absolutely certain that all causes have been held constant the conscious and rigorous attempt to do so is the crucial distinction between an experiment and an observation."

Statistical inference and experimental design provide us with a structured way to think about real world problems and the data we have to solve them, while avoiding as much as possible the gut-based data story telling that, intentional or not, can sometimes be confounded and misleading. As Francis Bacon once stated:

"what is in observation loose and vague is in information deceptive and treacherous"

Statistics provides a rigorous way of thinking that moves us from mere observation to useful information.

*UPDATE: Kevin Gray wrote a very good article that really gets at the spirit of a lot of what I wanted to convey in this post.

https://www.linkedin.com/pulse/statistical-thinking-nutshell-kevin-gray/

See also:

To Explain or Predict

Applied Econometrics