# Data Mining Study Notes: The ID3 Algorithm

1. Wikipedia's article on the ID3 algorithm

2. A PPT on Baidu Wenku with a worked example: the decision-tree ID3 algorithm
3. A Baidu Wenku PPT with many worked examples that opens with information theory; highly recommended reading on ID3
4. Python implementations of ID3 and C4.5: source code for the decision-tree ID3 and C4.5 algorithms in Python

The ID3 algorithm is trained on a dataset $S$ to produce a decision tree, which is stored in memory. At runtime, this decision tree classifies new, unseen test cases: starting from the root, the test case's attribute values determine which branch to follow, until a terminal node is reached that tells you what class the test case belongs to.

```
ID3(Examples, Target_Attribute, Attributes)
    Create a root node Root for the tree
    If all examples are positive, return the single-node tree Root with label = +
    If all examples are negative, return the single-node tree Root with label = -
    If the set of predicting attributes is empty, return the single-node tree Root
        with label = most common value of the target attribute in the examples
    Otherwise:
        A ← the attribute that best classifies the examples
        Set the decision attribute for Root to A
        For each possible value v_i of A:
            Add a new tree branch below Root, corresponding to the test A = v_i
            Let Examples(v_i) be the subset of examples that have the value v_i for A
            If Examples(v_i) is empty:
                Below this new branch add a leaf node with label = most common
                target value in the examples
            Else:
                Below this new branch add the subtree
                ID3(Examples(v_i), Target_Attribute, Attributes - {A})
    Return Root
```
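The pseudocode above can be sketched in Python. This is a minimal illustration, not a production implementation: the representation (examples as dicts mapping attribute names to discrete values) and all names are my own assumptions.

```python
# Minimal ID3 sketch. Examples are dicts of attribute -> value; the target
# attribute holds the class label. All names here are illustrative.
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def gain(examples, target, attr):
    """Information gain of splitting `examples` on `attr`."""
    labels = [e[target] for e in examples]
    remainder = 0.0
    for v in set(e[attr] for e in examples):
        subset = [e[target] for e in examples if e[attr] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(labels) - remainder

def id3(examples, target, attrs):
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1:              # all examples share one class
        return labels[0]
    if not attrs:                          # no predicting attributes left
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: gain(examples, target, a))
    tree = {best: {}}
    # We branch only on values actually observed, so no subset is empty;
    # the pseudocode's empty-branch case is therefore never reached here.
    for v in set(e[best] for e in examples):
        subset = [e for e in examples if e[best] == v]
        tree[best][v] = id3(subset, target, attrs - {best})
    return tree
```

The tree is returned as nested dicts (`{attribute: {value: subtree_or_label}}`), which keeps the sketch short; a class-based node type would be the more usual choice in a real library.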

If S is a collection of 14 examples with 9 YES and 5 NO examples, then

Entropy(S) = - (9/14) log2(9/14) - (5/14) log2(5/14) ≈ 0.940

Notice that entropy is 0 if all members of S belong to the same class (the data is perfectly classified). For a two-class problem, entropy ranges from 0 ("perfectly classified") to 1 ("totally random").
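The entropy figure above is easy to check numerically. A small sketch (the function name and count-list representation are my own choices):

```python
from math import log2

def entropy(counts):
    """Entropy of a class distribution given as a list of class counts."""
    total = sum(counts)
    return -sum((n / total) * log2(n / total) for n in counts if n > 0)

print(f"{entropy([9, 5]):.3f}")   # 9 YES, 5 NO -> prints 0.940
```

The `if n > 0` guard skips empty classes, since log2(0) is undefined; a class with zero members contributes nothing to the entropy.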

Gain(S, A), the information gain of example set S on attribute A, is defined as

Gain(S, A) = Entropy(S) - Σv ((|Sv| / |S|) * Entropy(Sv))

Where:

Σv is the sum over each value v of all possible values of attribute A

Sv = subset of S for which attribute A has value v

|Sv| = number of elements in Sv

|S| = number of elements in S

Suppose S is a set of 14 examples in which one of the attributes is wind speed. The values of Wind can be Weak or Strong. The classification of these 14 examples is 9 YES and 5 NO. For the attribute Wind, suppose there are 8 occurrences of Wind = Weak and 6 occurrences of Wind = Strong. For Wind = Weak, 6 of the examples are YES and 2 are NO; for Wind = Strong, 3 are YES and 3 are NO. Therefore

Entropy(Sweak) = - (6/8) log2(6/8) - (2/8) log2(2/8) = 0.811

Entropy(Sstrong) = - (3/6) log2(3/6) - (3/6) log2(3/6) = 1.00

Gain(S, Wind) = Entropy(S) - (8/14) * Entropy(Sweak) - (6/14) * Entropy(Sstrong)

= 0.940 - (8/14) * 0.811 - (6/14) * 1.00

= 0.048
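The Wind calculation can be reproduced directly from the stated counts (variable names are mine):

```python
from math import log2

def entropy(counts):
    """Entropy of a class distribution given as a list of class counts."""
    total = sum(counts)
    return -sum((n / total) * log2(n / total) for n in counts if n > 0)

e_s      = entropy([9, 5])   # whole set: 9 YES, 5 NO
e_weak   = entropy([6, 2])   # Wind = Weak: 6 YES, 2 NO
e_strong = entropy([3, 3])   # Wind = Strong: 3 YES, 3 NO

gain_wind = e_s - (8 / 14) * e_weak - (6 / 14) * e_strong
print(f"{gain_wind:.3f}")    # prints 0.048
```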

The gain is calculated for each attribute, and the attribute with the highest gain is used at the decision node.

In other words, ID3 selects the attribute with the smallest weighted entropy, which is equivalent to the largest information gain.