Decision Trees


Designing a Decision Tree Classifier
- Picking the root node
- Recursively branching


Picking the root node

- The goal is to keep the resulting decision tree as small as possible.
- The main decision in the algorithm is the selection of the next attribute to condition on (starting from the root node).
- We want attributes that split the examples into sets that are relatively pure in one label; this brings us closer to a leaf node. When a node contains samples of only one class, it becomes a leaf and is not split further.
- The most popular heuristic is based on information gain, which originated with Quinlan's ID3 system: at each step, split on the attribute that yields the largest information gain.


Entropy
- Entropy measures the impurity of a set of examples S.
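For reference, the standard ID3 definition: for a set S whose examples belong to classes with proportions $p_i$,

$$\mathrm{Entropy}(S) = -\sum_i p_i \log_2 p_i$$

For a two-class set with positive fraction $p_+$ and negative fraction $p_-$ this is $-p_+\log_2 p_+ - p_-\log_2 p_-$: it is 0 for a pure set and 1 for an even 50/50 split.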

Information Gain

- Gain(S, A) is the expected reduction in entropy caused by knowing the value of attribute A (i.e., by sorting S on A).
- Values(A) is the set of all possible values of attribute A; S_v is the subset of S for which attribute A has value v; |S| and |S_v| denote the number of samples in S and S_v respectively.
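In symbols, the standard definition is:

$$\mathrm{Gain}(S, A) = \mathrm{Entropy}(S) - \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|}\,\mathrm{Entropy}(S_v)$$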

Example: Play Tennis
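As a worked illustration (assuming the classic PlayTennis dataset from Quinlan's ID3 work and Mitchell's textbook: 14 examples, 9 positive and 5 negative, where the attribute Wind splits them into Weak = 6+/2− and Strong = 3+/3−):

$$\mathrm{Entropy}(S) = -\tfrac{9}{14}\log_2\tfrac{9}{14} - \tfrac{5}{14}\log_2\tfrac{5}{14} \approx 0.940$$

$$\mathrm{Gain}(S, \mathrm{Wind}) = 0.940 - \tfrac{8}{14}(0.811) - \tfrac{6}{14}(1.000) \approx 0.048$$

Computing the gain of every attribute this way and picking the largest one selects the root; for this dataset Outlook has the largest gain and becomes the root node.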

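These numbers are easy to check in code. Below is a minimal Scala sketch (illustrative only; the object name `InfoGain` and the hard-coded PlayTennis counts are my own, not from the original notes) that computes entropy and information gain from class counts:

```scala
object InfoGain {
  // Entropy of a set, given the number of examples in each class.
  def entropy(counts: Seq[Int]): Double = {
    val total = counts.sum.toDouble
    counts.filter(_ > 0).map { c =>
      val p = c / total
      -p * (math.log(p) / math.log(2)) // log base 2
    }.sum
  }

  // Gain(S, A): entropy of S minus the weighted entropy of the
  // subsets S_v produced by splitting on attribute A.
  def gain(parent: Seq[Int], subsets: Seq[Seq[Int]]): Double = {
    val total = parent.sum.toDouble
    entropy(parent) - subsets.map(s => (s.sum / total) * entropy(s)).sum
  }

  def main(args: Array[String]): Unit = {
    // PlayTennis: 9 positive / 5 negative overall;
    // Wind = Weak -> (6+, 2-), Wind = Strong -> (3+, 3-).
    println(f"Entropy(S)    = ${entropy(Seq(9, 5))}%.3f")                         // ~0.940
    println(f"Gain(S, Wind) = ${gain(Seq(9, 5), Seq(Seq(6, 2), Seq(3, 3)))}%.3f") // ~0.048
  }
}
```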
Below is a Decision Trees example, written in Scala against the RDD-based API, that outputs prediction results:

```scala
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.model.DecisionTreeModel
import org.apache.spark.mllib.util.MLUtils

// Load the training data (assumes an existing SparkContext `sc`, e.g. in spark-shell)
val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")

// Split the dataset into a training set and a test set
val splits = data.randomSplit(Array(0.7, 0.3))
val (trainingData, testData) = (splits(0), splits(1))

// Train a decision tree model
val numClasses = 2
val categoricalFeaturesInfo = Map[Int, Int]() // empty: all features are continuous
val impurity = "gini"
val maxDepth = 5
val maxBins = 32

val model = DecisionTree.trainClassifier(trainingData, numClasses,
  categoricalFeaturesInfo, impurity, maxDepth, maxBins)

// Feed the test set into the model and predict
val labelAndPreds = testData.map { point =>
  val prediction = model.predict(point.features)
  (point.label, prediction)
}

// Compute the model's error rate on the test set
val testErr = labelAndPreds.filter(r => r._1 != r._2).count().toDouble / testData.count()
println("Test Error = " + testErr)
println("Learned classification tree model:\n" + model.toDebugString)

// Output the prediction results (collect to the driver so they print
// locally rather than in executor logs)
println("Prediction Results:")
labelAndPreds.collect().foreach(println)
```

In this example, we first load a sample dataset and split it into a training set and a test set. We then train a decision tree model on the training set and feed the test set into the model for prediction. Finally, we output the prediction results and compute the model's error rate on the test set.
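Note that this Spark example uses Gini impurity ("gini"); trainClassifier also accepts "entropy" as the impurity parameter, which corresponds to the information-gain criterion described above.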