Monday, September 30, 2019

Wicked Problems and The Role of Expertise and AI in Data Science

In 2018, an article in Science characterized the challenge of pesticide resistance as a wicked problem:

“If we are to address this recalcitrant issue of pesticide resistance, we must treat it as a ‘wicked problem,’ in the sense that there are social, economic, and biological uncertainties and complexities interacting in ways that decrease incentives for actions aimed at mitigation.”

In graduate school, I worked on this same problem, attempting to model the social and economic systems with game theory and behavioral economics, and to capture the biological complexities with population genetics.

Wicked vs. Kind Environments

In data science, the learning environments in which we train our models can also be 'wicked.' In the EconTalk episode Mastery, Specialization, and Range with Russ Roberts, David Epstein discusses wicked and kind learning environments:

"The way that chess works makes it what's called a kind learning environment. So, these are terms used by psychologist Robin Hogarth. And what a kind learning environment is, is one where patterns recur; ideally a situation is constrained--so, a chessboard with very rigid rules and a literal board is very constrained; and, importantly, every time you do something you get feedback [where you] totally see the consequences. The consequences are completely immediate and accurate. And you adjust accordingly. And in these kinds of kind learning environments, if you are cognitively engaged you get better just by doing the activity."

"On the opposite end of the spectrum are wicked learning environments. And this is a spectrum, from kind to wicked. Wicked learning environments: often some information is hidden. Even when it isn't, feedback may be delayed. It may be infrequent. It may be nonexistent. And it may be partly accurate, or inaccurate in many of the cases. So, the most wicked learning environments will reinforce the wrong types of behavior."

As discussed in the podcast, most problems fall somewhere on a spectrum ranging from very kind environments like chess to more wicked environments like self-driving cars or medical diagnosis. What do experts have to offer where AI/ML falls short? The type of environment determines, to a great extent, the scope of disruption we can expect from AI applications.
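To make the distinction concrete, here is a toy simulation of my own (not from the podcast): a simple epsilon-greedy learner chooses repeatedly between two actions, one of which is objectively better. In the 'kind' setting, feedback is immediate and accurate; in the 'wicked' setting, the feedback signal is flipped nearly half the time.

```python
import random

def run_learner(noise, trials=2000, seed=0):
    """Epsilon-greedy learner choosing between two actions.

    Action 1 is truly better (success prob 0.6 vs 0.4). With
    probability `noise`, the feedback signal is flipped -- a crude
    stand-in for the inaccurate feedback of a wicked environment.
    Returns the share of plays spent on the better action.
    """
    rng = random.Random(seed)
    true_p = [0.4, 0.6]          # action 1 is objectively better
    wins, plays = [0, 0], [1, 1]  # start counts at 1 to avoid div-by-zero
    for _ in range(trials):
        if rng.random() < 0.1:                                  # explore
            a = rng.randrange(2)
        else:                                                   # exploit belief
            a = 0 if wins[0] / plays[0] >= wins[1] / plays[1] else 1
        outcome = rng.random() < true_p[a]
        if rng.random() < noise:                                # feedback lies
            outcome = not outcome
        plays[a] += 1
        wins[a] += outcome
    return plays[1] / sum(plays)

def average_share(noise, seeds=20):
    """Average over several runs so one lucky seed doesn't mislead us."""
    return sum(run_learner(noise, seed=s) for s in range(seeds)) / seeds

kind = average_share(noise=0.0)     # immediate, accurate feedback
wicked = average_share(noise=0.45)  # feedback wrong nearly half the time
print(f"kind environment:   {kind:.2f} of plays on the better action")
print(f"wicked environment: {wicked:.2f} of plays on the better action")
```

In the kind setting the learner reliably converges on the better action just by playing; in the wicked setting the corrupted feedback washes out the true difference between the actions, and runs frequently lock onto the worse one, mirroring Epstein's point that wicked environments can reinforce the wrong behavior.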

The Role of Human Expertise

In Thinking, Fast and Slow, Kahneman discusses two conditions for acquiring skill:

1) an environment that is sufficiently regular to be predictable
2) an opportunity to learn these regularities through prolonged practice

This sounds a lot like the 'kind' environments discussed above. Drawing on research by Robin Hogarth, Kahneman makes a similar distinction, describing 'wicked' environments as those in which experts are likely to learn the wrong lessons from experience. The problem is that in wicked environments, experts often default to heuristics, which can lead to wrong conclusions. Even when aware of these biases, experts are often nudged in the wrong direction by social norms. Kahneman gives an example involving physicians:

"Generally it is considered a weakness and a sign of vulnerability for clinicians to appear unsure. Confidence is valued over uncertainty and there is a prevailing censure against disclosing uncertainty to patients...acting on pretended knowledge is often the preferred solution."

This likely explains many of the mistakes and much of the low-value care that plague healthcare delivery, as well as dissatisfaction with both the quality and cost of healthcare. How many of us want our physicians to pretend to know what they are talking about? On the other hand, how many people are willing to accept an answer from their physician that rhymes with "let me look this up and get back to you later"?

One advantage AI may have over experts in kind environments is, as Kahneman puts it, the opportunity to learn through prolonged practice. A machine learning model can, so to speak, work through many more training examples than any human could.
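As a rough illustration of my own (not Kahneman's): suppose 'skill' is simply estimating a success rate from experience. The estimation error shrinks with the number of cases seen, and a machine can work through orders of magnitude more cases than a clinician encounters in a career.

```python
import random

def estimation_error(n, trials=100, p=0.6, seed=1):
    """Average absolute error when estimating a true success rate p
    from n observed cases, averaged over repeated experiments."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        hits = sum(rng.random() < p for _ in range(n))
        total += abs(hits / n - p)
    return total / trials

few_cases = estimation_error(n=50)       # a human-scale sample of cases
many_cases = estimation_error(n=20_000)  # trivial for a machine
print(f"error after 50 cases:     {few_cases:.3f}")
print(f"error after 20,000 cases: {many_cases:.4f}")
```

The error falls roughly with the square root of the sample size, so the machine's advantage here is purely one of 'prolonged practice' at scale; in a kind environment with stable regularities, that advantage compounds.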

Even in kind environments, an expert may swing and miss on cases where the correct decision is like a pitch straight over the plate. One reason Kahneman discusses in Thinking, Fast and Slow is 'ego depletion': the idea that mental energy becomes exhausted after significant exertion. As self-control breaks down, it's easy to default to heuristics and biases that lead to decisions resembling careless mistakes. This would certainly apply to physicians, given the number of stories we hear about burnout in the profession.

The solution seems to be what polymath economist Tyler Cowen suggested several years ago in his EconTalk discussion with Russ Roberts about his book Average is Over:

"I would stress much more that humans can always complement robots. I'm not saying every human will be good at this. That's a big part of the problem. But a large number of humans will work very effectively with robots and become far more productive, and this will be one of the driving forces behind that inequality."

Imagine a clinical situation where a physician's ego is substantially depleted by a difficult case. The physician could then lean on AI to prevent mistakes in the more routine decisions that follow. Or, by leveraging AI tools throughout the day, a clinician could conserve mental energy so that they are less likely to default to heuristics when more complex issues arise. How this synergy materializes is uncertain, but it will continue to involve substantial expertise on the part of many professionals. Together, human expertise and AI may have the best chance of tackling the most wicked problems.


Wicked evolution: Can we address the sociobiological dilemma of pesticide resistance? | Science

Thinking, Fast and Slow. Daniel Kahneman. 2011

EconTalk: David Epstein on Mastery, Specialization, and Range

EconTalk: Tyler Cowen on Inequality, the Future, and Average is Over