Occam's razor vs. machine learning
*Occam's razor (Russell's version)*
If Russell were studying machine learning these days, he'd probably throw out all of the textbooks.
The reason is that ML introduces too many terms that differ only subtly, or not at all. It's hard to understand the scale of the problem without a good example.
You can ask your ML-experienced friends: how many distinct entities are there in this list of terms?
- mean squared error (MSE)
- logistic loss
- cross-entropy
- Kullback-Leibler (KL) divergence
- logarithmic scoring rule
- quadratic scoring rule
- Brier scoring rule
- Gini splitting criterion (Gini impurity) for decision tree (not Gini coefficient)
- entropy splitting criterion for decision tree
- binomial / multinomial deviance
- ordinary least squares
- mutual information
- logarithmic loss
- information gain
- softmax classifier
(Not to mention trivial facts, such as the sum of squared errors being essentially the same entity as MSE.)
The answer is: 2.
This alone is enough to conclude that the terminology used in ML today is very unfriendly and could use some minimization.
Group 1 contains different synonyms of MSE:
- mean squared error (MSE)
- quadratic scoring rule
- Brier scoring rule
- Gini splitting criterion (Gini impurity)
- ordinary least squares
It is not obvious that Gini is a particular version of MSE splitting, but I demonstrate this below.
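Before that, here is a quick sanity check of a simpler identity from this group, a minimal sketch with made-up labels and predicted probabilities: the Brier score is literally the mean squared error of the predicted probabilities.

```python
import numpy as np
from sklearn.metrics import brier_score_loss, mean_squared_error

# Made-up binary labels and predicted probabilities of class 1.
y_true = np.array([0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.8, 0.6, 0.3, 0.9])

# The Brier score is just the mean squared error of the probabilities.
assert np.isclose(brier_score_loss(y_true, y_prob),
                  mean_squared_error(y_true, y_prob))
```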
Group 2 contains different versions of cross-entropy:
- logistic loss
- cross-entropy
- Kullback-Leibler (KL) divergence
- logarithmic scoring rule
- entropy splitting criterion for decision tree
- binomial / multinomial deviance
- mutual information
- logarithmic loss
- information gain
- softmax classifier
Entropy is the cross-entropy of a distribution with itself, and KL divergence is the difference between cross-entropy and entropy.
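These identities are easy to verify numerically; here is a minimal sketch with two made-up discrete distributions:

```python
import numpy as np

# Two made-up discrete distributions over the same four outcomes.
p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.3, 0.3, 0.2, 0.2])

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = -sum_x p(x) * log q(x)."""
    return -np.sum(p * np.log(q))

entropy = cross_entropy(p, p)        # entropy is cross-entropy of p with itself
kl = cross_entropy(p, q) - entropy   # KL(p || q) = H(p, q) - H(p)

# Cross-check against the textbook definition of KL divergence.
assert np.isclose(kl, np.sum(p * np.log(p / q)))
```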
There are many other ways to arrive at the same concept in particular cases: the softmax classifier corresponds to maximizing the log-likelihood of a Bernoulli distribution (in the case of two classes) or a categorical distribution (in the case of more than two classes).
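As an illustration, here is a minimal sketch (with made-up logits) showing that the cross-entropy loss of a softmax classifier on a single sample is exactly the negative log-likelihood of the corresponding categorical distribution:

```python
import numpy as np

# Made-up logits for one sample over three classes, and its true class.
logits = np.array([2.0, -1.0, 0.5])
true_class = 0

# Softmax turns the logits into a categorical distribution.
probs = np.exp(logits) / np.exp(logits).sum()

# Cross-entropy against the one-hot target ...
one_hot = np.eye(3)[true_class]
cross_entropy_loss = -np.sum(one_hot * np.log(probs))

# ... is the negative log-likelihood of the categorical distribution.
assert np.isclose(cross_entropy_loss, -np.log(probs[true_class]))
```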
The non-trivial point here: the entropy splitting criterion is the same thing as LogLoss minimization.
Why Gini splitting = MSE minimization?
The simplest case is binary classification: $y_i = 0 \text{ or } 1$. Let's first revisit the MSE splitting criterion, i.e., minimization of $\sum_i (y_i - \hat{y}_i)^2$, where the tree's prediction $\hat{y}_i$ is constant within each leaf.
Suppose we split the feature space into the leaves of a tree, $\text{leaf} = 1, 2, \dots, L$.
We are going to group the summands according to the leaf each observation belongs to. The penalty for a single leaf then depends only on the numbers of samples $n_{\text{leaf}, 0}$ and $n_{\text{leaf}, 1}$ of classes 0 and 1 in it:
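$$\text{MSE}_\text{leaf} = \min_{\hat{y}_\text{leaf}} \left[\, n_{\text{leaf}, 0}\, \hat{y}_\text{leaf}^{\,2} + n_{\text{leaf}, 1} \left(1 - \hat{y}_\text{leaf}\right)^2 \right] = \frac{n_{\text{leaf}, 0}\, n_{\text{leaf}, 1}}{n_{\text{leaf}, 0} + n_{\text{leaf}, 1}},$$

where the minimum is attained at the constant leaf prediction $\hat{y}_\text{leaf} = \dfrac{n_{\text{leaf}, 1}}{n_{\text{leaf}, 0} + n_{\text{leaf}, 1}}$.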
So, while building the decision tree, we greedily optimize the following global function (each individual split only affects two terms in this sum):
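$$\text{MSE} = \sum_\text{leaf} \frac{n_{\text{leaf}, 0}\, n_{\text{leaf}, 1}}{n_{\text{leaf}, 0} + n_{\text{leaf}, 1}}$$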
Let's also introduce $n_\text{leaf} = n_{\text{leaf}, 0} + n_{\text{leaf}, 1}$, the total number of samples in the leaf, and the portion of class 1 in the leaf, $p_{\text{leaf}, 1} = \dfrac{n_{\text{leaf}, 1}}{n_\text{leaf}}$. Using the new terms, we have:
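$$\text{MSE} = \sum_\text{leaf} n_\text{leaf}\, p_{\text{leaf}, 1} \left(1 - p_{\text{leaf}, 1}\right)$$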
… which is nothing but the Gini impurity of each leaf (up to a constant factor of 2 in the binary case, which does not affect which split is chosen), weighted by the number of samples in the leaf and summed over leaves. It is easy to see that in the case of classification with more than two classes, we need to first one-hot encode the target and then minimize MSE to get the same splitting rule as provided by Gini.
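Here is a quick numerical check of the multiclass statement, a minimal sketch with made-up class counts in a single leaf: the sum of squared errors of one-hot targets against the optimal constant prediction equals the leaf's Gini impurity times the number of samples in it.

```python
import numpy as np

# Made-up class counts inside one leaf (three classes).
counts = np.array([30, 50, 20])
n_leaf = counts.sum()
p = counts / n_leaf                       # class frequencies in the leaf

# One-hot targets for every sample in the leaf.
y_onehot = np.repeat(np.eye(3), counts, axis=0)

# Sum of squared errors with the optimal constant prediction p per class ...
sse_leaf = np.sum((y_onehot - p) ** 2)

# ... equals n_leaf times the Gini impurity of the leaf.
gini = 1.0 - np.sum(p ** 2)
assert np.isclose(sse_leaf, n_leaf * gini)
```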
Why Entropy splitting = LogLoss minimization?
Let's start by writing down the LogLoss for the decision tree:
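$$\text{LogLoss} = -\sum_i \sum_c y_{i, c} \log p_{i, c}$$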
(where $y_{i, c}$ is 1 if sample $i$ belongs to class $c$ and 0 otherwise, and $p_{i, c}$ is the probability of the $i$th sample belonging to class $c$, as predicted by the tree).
Since the predicted probabilities are constant within each leaf, we can again group the summands by leaf; let $n_{\text{leaf}, c}$ denote the number of samples of class $c$ in a leaf. When selecting the optimal predictions in a leaf, we optimize:
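$$-\sum_c n_{\text{leaf}, c} \log p_{\text{leaf}, c} \to \min, \qquad \text{subject to } \sum_c p_{\text{leaf}, c} = 1.$$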
It is easy to prove (e.g., with Lagrange multipliers) that the optimum is attained at $p_{\text{leaf}, c} = \dfrac{n_{\text{leaf}, c}}{n_\text{leaf}}$, where $n_\text{leaf} = \sum_c n_{\text{leaf}, c}$ is the number of samples in the leaf.
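Substituting the optimal probabilities back into the per-leaf objective gives

$$-\sum_c n_{\text{leaf}, c} \log \frac{n_{\text{leaf}, c}}{n_\text{leaf}} = n_\text{leaf} \left( -\sum_c p_{\text{leaf}, c} \log p_{\text{leaf}, c} \right)$$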
… which is nothing but the entropy impurity of the leaf, multiplied by the number of samples in it (and summed over leaves for the whole tree).
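The analogous numerical check for this derivation, again a minimal sketch with made-up class counts in a single leaf: the LogLoss of the leaf with the optimal constant probabilities equals the leaf's entropy impurity times the number of samples in it.

```python
import numpy as np

# Made-up class counts inside one leaf (three classes).
counts = np.array([30, 50, 20])
n_leaf = counts.sum()
p = counts / n_leaf                       # optimal per-leaf probabilities

# LogLoss of the leaf with these constant predictions ...
logloss_leaf = -np.sum(counts * np.log(p))

# ... equals n_leaf times the entropy impurity of the leaf.
entropy_impurity = -np.sum(p * np.log(p))
assert np.isclose(logloss_leaf, n_leaf * entropy_impurity)
```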
See also
- Statistics vs. Machine Learning, fight!
- my post about the ROC curve, with tons of definitions for the binary classification problem