Here is an excerpt from the transcript:

Russ Roberts:

*But you have a small sample--that's very noisy, usually. Very imprecise. And you still found statistical significance. That means, 'Wow, if you'd had a big sample you'd have found an even more reliable effect.'*

Andrew Gelman:

*Um, yes. You're using what we call the 'That which does not kill my statistical significance makes it stronger' fallacy. We can talk about that, too.*

From Andrew's Blog Post:

*"The idea is that statistical significance is taken as an even stronger signal when it was obtained from a noisy study.*

*This idea, while attractive, is wrong. Eric Loken and I call it the “What does not kill my statistical significance makes it stronger” fallacy.*

*What went wrong? Why is it a fallacy? In short, conditional on statistical significance at some specified level, the noisier the estimate, the higher the Type M and Type S errors. Type M (magnitude) error says that a statistically significant estimate will overestimate the magnitude of the underlying effect, and Type S error says that a statistically significant estimate can have a high probability of getting the sign wrong.*

*We demonstrated this with an extreme case a couple years ago in a post entitled, "This is what 'power = .06' looks like. Get used to it." We were talking about a really noisy study where, if a statistically significant difference is found, it is guaranteed to be at least 9 times higher than any true effect, with a 24% chance of getting the sign backward."*
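The "power = .06" regime is easy to reproduce with a simulation. The numbers below (true effect 2, standard error 8.1) are an assumed setup chosen to give power ≈ .06, not necessarily the exact figures from Gelman and Loken's post; the point is to show how, conditional on significance, the estimate exaggerates the effect and often flips its sign:

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 2.0   # assumed true effect (chosen so that power comes out near .06)
se = 8.1            # assumed standard error of the estimate
n_sims = 1_000_000

# Draw estimates from a normal sampling distribution around the true effect
est = rng.normal(true_effect, se, n_sims)

# "Statistically significant" at the conventional two-sided 5% level
sig = np.abs(est) > 1.96 * se

power = sig.mean()
type_s = (est[sig] < 0).mean()                        # wrong sign, given significance
type_m = np.abs(est[sig]).mean() / true_effect        # average exaggeration factor

print(f"power  ≈ {power:.3f}")   # roughly 0.06
print(f"Type S ≈ {type_s:.2f}")  # roughly 0.24: ~24% of significant results have the wrong sign
print(f"Type M ≈ {type_m:.1f}x") # significant estimates overstate the effect about ninefold
```

Note that the exaggeration is built in mechanically: with se = 8.1, any significant estimate must exceed 1.96 × 8.1 ≈ 15.9 in absolute value, several times the true effect of 2 before averaging even begins.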

*As Gelman notes in both the podcast and the blog post, this fallacy is not widely known, and, as he and Loken point out, even very well-known researchers appear to have committed it at one time or another in their writing or dialogue.*

See also: Econometrics, Multiple Testing, and Researcher Degrees of Freedom
