Frank Harrell writes "Is Medicine Mesmerized by Machine Learning?" Some time ago I wrote about predictive modeling and the differences between what the ROC curve may tell us and how well a model calibrates.
There I quoted from the journal Circulation:
'When the goal of a predictive model is to categorize individuals into risk strata, the assessment of such models should be based on how well they achieve this aim...The use of a single, somewhat insensitive, measure of model fit such as the c statistic can erroneously eliminate important clinical risk predictors from consideration in scoring algorithms'
Not too long ago, Dr. Harrell shared the following tweet related to this:
I have seen hundreds of ROC curves in the past few years. I've yet to see one that provided any insight whatsoever. They reverse the roles of X and Y and invite dichotomization. Authors seem to think they're obligatory. Let's get rid of 'em. @f2harrell 8:42 AM - 1 Jan 2018
In his Statistical Thinking post above, Dr. Harrell writes:
"Like many applications of ML where few statistical principles are incorporated into the algorithm, the result is a failure to make accurate predictions on the absolute risk scale. The calibration curve is far from the line of identity as shown below...The gain in c-index from ML over simpler approaches has been more than offset by worse calibration accuracy than the other approaches achieved."
That is, depending on the goal, better ROC scores don't necessarily mean better models.
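To make the discrimination-versus-calibration distinction concrete, here is a minimal sketch (not from Harrell's post; toy data and function names are my own). The c-statistic (AUC) depends only on the rank ordering of predictions, so any monotone distortion of the probabilities leaves it unchanged while ruining calibration:

```python
def auc(y, p):
    """Probability that a random positive outranks a random negative (ties count 0.5)."""
    pos = [pi for yi, pi in zip(y, p) if yi == 1]
    neg = [pi for yi, pi in zip(y, p) if yi == 0]
    wins = sum((a > b) + 0.5 * (a == b) for a in pos for b in neg)
    return wins / (len(pos) * len(neg))

def brier(y, p):
    """Mean squared error of predicted probabilities (lower is better); sensitive to calibration."""
    return sum((pi - yi) ** 2 for yi, pi in zip(y, p)) / len(y)

# Toy data: reasonably calibrated predictions p, and a distorted version
# p**2 that preserves the ranking but shrinks every probability.
y = [0, 0, 0, 1, 0, 1, 1, 1]
p = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
p_distorted = [pi ** 2 for pi in p]

print(auc(y, p) == auc(y, p_distorted))    # True: discrimination identical
print(brier(y, p) < brier(y, p_distorted)) # True: calibration got worse
```

Both models would look the same on an ROC curve; only a calibration-sensitive measure reveals that the second one's absolute risk estimates are off.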
But that post was about more than discrimination and calibration. It contrasted the logistic regression approach taken in Exceptional Mortality Prediction by Risk Scores from Common Laboratory Tests with the deep learning approach used in Improving Palliative Care with Deep Learning.
"One additional point: the ML deep learning algorithm is a black box, not provided by Avati et al, and apparently not usable by others. And the algorithm is so complex (especially with its extreme usage of procedure codes) that one can’t be certain that it didn’t use proxies for private insurance coverage, raising a possible ethics flag. In general, any bias that exists in the health system may be represented in the EHR, and an EHR-wide ML algorithm has a chance of perpetuating that bias in future medical decisions. On a separate note, I would favor using comprehensive comorbidity indexes and severity of disease measures over doing a free-range exploration of ICD-9 codes."
This pushes back against the idea that deep neural nets can effectively bypass feature engineering, or at least urges caution in specific contexts.
To be fair, he is not as critical of the paper's authors as he is of what he considers the undue accolades the paper has received.
This ties back to my post on LinkedIn a couple weeks ago, Deep Learning, Regression, and SQL.
To Explain or Predict
Big Data: Causality and Local Expertise Are Key in Agronomic Applications
Feature Engineering for Deep Learning
In Deep Learning, Architecture Engineering is the New Feature Engineering