You may already be familiar with many versions of Bayes' rule:

p(y_i | x) = p(x | y_i) p(y_i) / p(x)

or

p(y | x) = p(x | y) p(y) / p(x) = p(x | y) p(y) / ∑_i p(x | y_i) p(y_i)

or

p(y = y_i | x = x_k) = p(x = x_k | y = y_i) p(y = y_i) / ∑_j p(x = x_k | y = y_j) p(y = y_j)

Let's assume that y takes on i distinct values or classes. If we know p(x | y_i) and p(y_i), then we can use Bayes' rule to calculate:

p(y_i | x) = p(x | y_i) p(y_i) / p(x)    (1)

However, if x is represented as (x_1, …, x_k), then there are numerous combinations of feature values to consider for each class or value of y, which makes computing p(x | y_i) demanding. One approach is to simplify this calculation by making a 'naïve' assumption of conditional independence. For the case where k = 2:

p(x | y_i) = p(x_1, x_2 | y_i) = p(x_1 | y_i) p(x_2 | y_i) = ∏_k p(x_k | y_i)    (2)

*Conditional independence assumption:* the individual x-values are assumed to be independent, conditional on the value of y.

Under the conditional independence assumption, the number of probabilities that must be estimated for each value of y decreases dramatically: with k binary features, the full joint p(x_1, …, x_k | y) requires 2^k − 1 probabilities per class, while the naïve model requires only k.
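To make that reduction concrete, here's a small sketch (my own illustration, assuming binary features) counting the probabilities that must be estimated per class with and without the naïve assumption:

```python
def n_params_joint(k: int) -> int:
    # Full joint p(x_1, ..., x_k | y): one probability per combination
    # of k binary features (2**k - 1 free parameters, since they sum to 1).
    return 2 ** k - 1

def n_params_naive(k: int) -> int:
    # Under conditional independence: one Bernoulli parameter
    # p(x_j = 1 | y) per feature.
    return k

for k in (2, 10, 20):
    print(k, n_params_joint(k), n_params_naive(k))
```

Even at k = 20 binary features, the naïve model needs only 20 numbers per class instead of over a million.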

Substituting the calculation in (2) into (1) gives us the **Naïve Bayes Classifier**:

p(y_i | x) = ∏_k p(x_k | y_i) p(y_i) / p(x)

Like ridge regression, the Naïve Bayes Classifier is another example of a biased estimator that can outperform unbiased estimators given its lower variance. From 'The Elements of Statistical Learning: Data Mining, Inference, and Prediction':

*"Despite these rather optimistic assumptions, naive Bayes classifiers often outperform far more sophisticated alternatives… although the individual class density estimates may be biased, this bias might not hurt the posterior probabilities as much, especially near the decision regions. In fact, the problem may be able to withstand considerable bias for the savings in variance such a 'naive' assumption earns."*

Typically, p(x | y_i) and p(y_i) are estimated from a training set. Applying these estimates, we then calculate the posterior probabilities using Bayes' rule.
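Putting the pieces together, here's a minimal sketch of a Naïve Bayes classifier for binary features, estimated from a toy training set. The data and the Laplace smoothing constant `alpha` are my own illustrative choices, not from the post:

```python
def train_naive_bayes(X, y, alpha=1.0):
    """Estimate p(y_i) and p(x_k = 1 | y_i) from binary training data.
    alpha is Laplace smoothing, to avoid zero probabilities."""
    classes = sorted(set(y))
    n_features = len(X[0])
    priors, likelihoods = {}, {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        priors[c] = len(rows) / len(X)
        # One Bernoulli parameter per feature, per class.
        likelihoods[c] = [
            (sum(r[k] for r in rows) + alpha) / (len(rows) + 2 * alpha)
            for k in range(n_features)
        ]
    return priors, likelihoods

def predict_proba(x, priors, likelihoods):
    """Posterior p(y_i | x) = prod_k p(x_k | y_i) * p(y_i) / p(x)."""
    scores = {}
    for c, prior in priors.items():
        s = prior
        for k, xk in enumerate(x):
            p = likelihoods[c][k]
            s *= p if xk == 1 else (1 - p)
        scores[c] = s
    total = sum(scores.values())  # p(x), the normalizing constant
    return {c: s / total for c, s in scores.items()}

# Toy training set: two binary features, two classes.
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 1, 0, 0, 1, 0]
priors, likelihoods = train_naive_bayes(X, y)
print(predict_proba([1, 1], priors, likelihoods))
```

Note that the normalizing constant p(x) never needs to be modeled separately; summing the per-class numerators and dividing recovers posteriors that sum to one.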

Since we assume the priors are 'given', isn't this in fact a frequentist way of thinking?

Aside question: can I take the posterior probabilities, calculate a Gini/AUC statistic, and directly compare it with a discriminative method like, say, logistic regression?

I think the answer is that it cannot. However, I would like to hear readers' comments.

If not, then what other metrics can we use to compare performance (misclassification rates, profit, etc.)? Any thoughts?

I apologize for taking so long to moderate this and post; I've been out of town at conferences and had spotty internet access. To comment on your questions, I'm not sure. I've only begun to think about Bayesian methods (you may check out some of my more recent Bayesian-tagged posts). They won't answer your questions, though; they're beyond my current comprehension of Bayesian stats. In terms of comparing AUC, if the targets are the same (such as a binary 0/1), I always thought that the area under the ROC curve was universally comparable across algorithms, say neural net vs. logistic vs. decision tree. I'd be interested in other readers' comments on all of the above as well.
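One way to see why AUC transfers across models: it depends only on how the scores rank the cases, not on their scale. A small sketch (toy labels and scores of my own) computing AUC as the probability that a random positive outranks a random negative:

```python
def auc(y_true, scores):
    """Area under the ROC curve, via the rank-comparison definition:
    the probability that a randomly chosen positive case gets a higher
    score than a randomly chosen negative case (ties count half).
    Because only the ordering matters, posteriors from naive Bayes and
    probabilities from logistic regression are directly comparable."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 0, 0, 1, 0]
model_a = [0.9, 0.8, 0.3, 0.2, 0.6, 0.4]  # e.g. naive Bayes posteriors
model_b = [0.7, 0.9, 0.8, 0.1, 0.4, 0.3]  # e.g. logistic regression probs
print(auc(y_true, model_a), auc(y_true, model_b))
```

Here model_a ranks every positive above every negative (AUC = 1.0), while model_b misranks some pairs, so the two models can be compared head-to-head on the same targets.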
