1. The ID3 algorithm (from the Wikipedia article)
The ID3 algorithm is trained on a dataset to produce a decision tree, which is stored in memory. At runtime, this decision tree is used to classify new, unseen test cases: working down the tree, the attribute values of the test case determine the path to a terminal node, whose label tells you what class the test case belongs to.
If S is a collection of 14 examples with 9 YES and 5 NO examples, then

Entropy(S) = - p(YES)*log2(p(YES)) - p(NO)*log2(p(NO))
           = - (9/14)*log2(9/14) - (5/14)*log2(5/14) = 0.940
Notice that entropy is 0 if all members of S belong to the same class (the data is perfectly classified). The range of entropy is 0 ("perfectly classified") to 1 ("totally random").
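As a quick sketch, the two-class entropy formula above can be computed directly (the helper name and its count-based signature are my own choices, not from the source):

```python
import math

def entropy(pos, neg):
    """Entropy of a set with `pos` positive and `neg` negative examples."""
    total = pos + neg
    h = 0.0
    for count in (pos, neg):
        if count:  # by convention, 0 * log2(0) is treated as 0
            p = count / total
            h -= p * math.log2(p)
    return h

print(round(entropy(9, 5), 3))   # the 14-example set above: 0.94
print(entropy(7, 7))             # evenly split ("totally random"): 1.0
print(entropy(5, 0))             # one class only ("perfectly classified"): 0.0
```

The `if count:` guard handles the boundary cases, since `log2(0)` is undefined but a class with zero members contributes nothing to the entropy.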
Gain(S, A), the information gain of example set S on attribute A, is defined as

Gain(S, A) = Entropy(S) - SUM over each value v in Values(A) of (|Sv|/|S|) * Entropy(Sv)

where
Values(A) = the set of all possible values of attribute A
Sv = subset of S for which attribute A has value v
|Sv| = number of elements in Sv
|S| = number of elements in S
Suppose S is a set of 14 examples in which one of the attributes is Wind. The values of Wind can be Weak or Strong. The classification of these 14 examples is 9 YES and 5 NO. For attribute Wind, suppose there are 8 occurrences of Wind = Weak and 6 occurrences of Wind = Strong. For Wind = Weak, 6 of the examples are YES and 2 are NO. For Wind = Strong, 3 are YES and 3 are NO. Therefore
Entropy(Sweak) = - (6/8)*log2(6/8) - (2/8)*log2(2/8) = 0.811
Entropy(Sstrong) = - (3/6)*log2(3/6) - (3/6)*log2(3/6) = 1.00

Gain(S, Wind) = Entropy(S) - (8/14)*Entropy(Sweak) - (6/14)*Entropy(Sstrong)
              = 0.940 - (8/14)*0.811 - (6/14)*1.00
              = 0.048
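The worked example for Wind can be checked numerically. This is a sketch using the counts from the text (the helper name is my own):

```python
import math

def entropy(pos, neg):
    """Entropy of a set with `pos` positive and `neg` negative examples."""
    total = pos + neg
    return -sum((c / total) * math.log2(c / total) for c in (pos, neg) if c)

# Worked example: S has 9 YES / 5 NO; Wind splits it into
# Weak (6 YES / 2 NO, 8 examples) and Strong (3 YES / 3 NO, 6 examples).
gain_wind = entropy(9, 5) - (8/14) * entropy(6, 2) - (6/14) * entropy(3, 3)
print(round(gain_wind, 3))   # 0.048
```

The result matches the hand calculation above: a gain of about 0.048 bits from splitting on Wind.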
For each attribute the gain is calculated, and the attribute with the highest gain is chosen for the decision node. In other words, ID3 selects the attribute that yields the smallest weighted entropy after the split (equivalently, the largest information gain).
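The attribute-selection step can be sketched as follows. The data representation (a list of attribute dicts plus a parallel label list) and all names here are my own illustrative choices, not from the source:

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    """Return the attribute with the largest information gain (ID3's split rule)."""
    base = entropy(labels)
    def gain(attr):
        g = base
        for v in set(r[attr] for r in rows):
            sub = [lab for r, lab in zip(rows, labels) if r[attr] == v]
            g -= (len(sub) / len(labels)) * entropy(sub)
        return g
    return max(attributes, key=gain)

# Toy data matching the worked example: Wind splits 9 YES / 5 NO into
# Weak (6 YES / 2 NO) and Strong (3 YES / 3 NO); Humidity is constant,
# so its gain is 0 and Wind should win.
rows = ([{"Wind": "Weak", "Humidity": "Normal"}] * 8
        + [{"Wind": "Strong", "Humidity": "Normal"}] * 6)
labels = ["YES"] * 6 + ["NO"] * 2 + ["YES"] * 3 + ["NO"] * 3
print(best_attribute(rows, labels, ["Wind", "Humidity"]))  # Wind
```

A constant attribute leaves every subset with the same entropy as S itself, so its gain is exactly 0; Wind's gain of about 0.048 therefore makes it the chosen decision node.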