How decision trees work:
A computer algorithm finds the decision boundary automatically from the training data.
[Figure: decision boundary found by the algorithm]
Python code for a decision tree (sklearn)
Link: http://scikit-learn.org/stable/modules/tree.html
>>> from sklearn import tree
>>> X = [[0, 0], [1, 1]]
>>> Y = [0, 1]
>>> clf = tree.DecisionTreeClassifier()
>>> clf = clf.fit(X, Y)
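Once fitted, the classifier can predict the class of new samples; continuing the same example from the sklearn documentation, the point [2, 2] falls on the class-1 side of the learned split:
>>> clf.predict([[2., 2.]])
array([1])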
Tunable parameters: sklearn.tree.DecisionTreeClassifier
DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)
The min_samples_split parameter, used in the exercise below, is the minimum number of samples a node must contain before it can be split; raising it produces a simpler tree and guards against overfitting.
Exercise: compute the accuracy when min_samples_split is 50 and when it is 2.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# The course exercise supplies features_train/labels_train and
# features_test/labels_test; here the iris data is split so the
# snippet runs on its own.
iris = load_iris()
features_train, features_test, labels_train, labels_test = train_test_split(
    iris.data, iris.target, random_state=42)

# Require at least 50 samples in a node before it may be split
clf_50 = DecisionTreeClassifier(min_samples_split=50)
clf_50.fit(features_train, labels_train)
pred_50 = clf_50.predict(features_test)
acc_min_samples_split_50 = accuracy_score(labels_test, pred_50)

# Default: a node may be split with as few as 2 samples
clf_2 = DecisionTreeClassifier(min_samples_split=2)
clf_2.fit(features_train, labels_train)
pred_2 = clf_2.predict(features_test)
acc_min_samples_split_2 = accuracy_score(labels_test, pred_2)

def submitAccuracies():
    return {"acc_min_samples_split_2": round(acc_min_samples_split_2, 3),
            "acc_min_samples_split_50": round(acc_min_samples_split_50, 3)}
Entropy and impurity
Entropy measures the impurity of a set of examples: H = -sum_i p_i * log2(p_i), where p_i is the fraction of examples in class i. A pure set (all one class) has entropy 0; an evenly mixed set has maximal entropy.
Example: when a stretch of road has a speed limit, the points there are red crosses (slow) regardless of the grade, so splitting on the speed-limit feature yields a perfectly pure subset.
Building a decision tree comes down to finding a variable, and a split point on that variable, that produce subsets that are as homogeneous as possible. Making decisions with the tree is just this procedure applied recursively; see the sketch below.
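A minimal sketch of this recursion: scikit-learn's export_text (available since version 0.21) prints the splits a fitted tree has learned, one indentation level per recursive split; max_depth=2 is used here only to keep the printout short.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a shallow tree, then print its structure: each indented level is
# one recursive "pick a feature and a threshold" step
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)
print(export_text(clf, feature_names=list(iris.feature_names)))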
Information gain
Information gain is defined as the entropy of the parent node minus the weighted average of the entropies of the children created by splitting the parent.
The decision tree algorithm chooses the feature to split on by maximizing information gain. When a feature can take several distinct values, the same criterion also tells the algorithm where along that feature to place the split: it tries the candidate splits and keeps the one that maximizes information gain.
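A small worked example of this definition; the helper functions entropy and information_gain below are illustrative, not part of sklearn. Splitting a 50/50 parent into two pure children gives the largest possible gain of 1 bit.

import numpy as np

def entropy(labels):
    # H = -sum(p_i * log2(p_i)) over the class fractions p_i
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, children):
    # Entropy of the parent minus the size-weighted average entropy
    # of the children
    n = sum(len(c) for c in children)
    weighted = sum(len(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted

# The speed-limit split above sends every "slow" point to one child
parent = ["slow", "slow", "fast", "fast"]
children = [["slow", "slow"], ["fast", "fast"]]
print(information_gain(parent, children))  # 1.0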