Algorithm: Decision Tree, Entropy, Information Gain and Continuous Features

Decision trees are the foundation of random forests and are one way to display an algorithm that contains only conditional control statements. Different decision trees (models) can be built from historical samples and experience, and their performance is evaluated using information entropy and information gain. With high-dimensional input data, finding the optimal tree is NP-hard, so a greedy algorithm is used instead. This article discusses how to select features, how to prevent overfitting, and how to handle continuous-valued features, with examples of decision trees applied to classification and regression problems.

The decision tree is the foundation of the random forest.

A decision tree is a decision support tool that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements.
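Since a decision tree is just a cascade of conditional control statements, it can be written directly as nested if/else branches. A minimal sketch, using the classic (hypothetical here) "play tennis" example:

```python
# A toy decision tree for deciding whether to play tennis,
# written as the nested conditionals a decision tree encodes.
def play_tennis(outlook: str, humidity: str, wind: str) -> bool:
    if outlook == "sunny":
        return humidity == "normal"   # sunny branch splits on humidity
    elif outlook == "overcast":
        return True                   # overcast is always a "play" leaf
    else:                             # rainy branch splits on wind
        return wind == "weak"

print(play_tennis("sunny", "normal", "weak"))   # True
print(play_tennis("rainy", "high", "strong"))   # False
```

Each internal node tests one feature, and each leaf returns a decision; learning a tree means learning which tests to place where.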

Based on historical samples and experience, we can build different decision trees (models), so we need a method to evaluate the performance of each decision tree.

Decision boundary of Decision Tree
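Because each node tests a single feature against a threshold, a decision tree's boundary is made of axis-aligned segments: the feature space is cut into rectangles. A minimal sketch with two made-up continuous features and illustrative thresholds:

```python
# Hypothetical depth-2 tree over two continuous features (x1, x2);
# its decision boundary consists of axis-aligned rectangles.
def predict(x1: float, x2: float) -> int:
    if x1 < 0.5:          # first split: vertical line x1 = 0.5
        return 0
    elif x2 < 0.3:        # second split: horizontal line x2 = 0.3
        return 0
    else:
        return 1

# Points on either side of the boundary
print(predict(0.2, 0.9))  # 0 (left of x1 = 0.5)
print(predict(0.8, 0.9))  # 1 (right of x1 = 0.5, above x2 = 0.3)
print(predict(0.8, 0.1))  # 0 (right of x1 = 0.5, below x2 = 0.3)
```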

The structure of the decision tree is complex, so learning it is a structured prediction problem.

NP-hard problem

A problem that cannot be solved in polynomial time is called NP-hard; finding the optimal decision tree is such a problem.

We therefore use an approximate approach, such as a greedy algorithm, to solve it.

Information Gain
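Information gain measures how much a split on a feature reduces the entropy of the labels: $H(S) = -\sum_k p_k \log_2 p_k$ and $IG(S, A) = H(S) - \sum_v \frac{|S_v|}{|S|} H(S_v)$. A minimal sketch (function names are my own):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H(S) = -sum_k p_k * log2(p_k) over the class frequencies in labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """IG(S, A) = H(S) - sum_v |S_v|/|S| * H(S_v), splitting on feature A."""
    n = len(labels)
    subsets = {}
    for y, v in zip(labels, feature_values):
        subsets.setdefault(v, []).append(y)
    remainder = sum(len(s) / n * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

# A perfectly informative binary feature reduces entropy from 1 bit to 0.
y = [1, 1, 0, 0]
x = ["a", "a", "b", "b"]
print(entropy(y))               # 1.0
print(information_gain(y, x))   # 1.0
```

A feature whose values are independent of the labels gets an information gain of 0, so this score directly ranks candidate splits.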

Type 1 of the decision tree

Type 2 of the decision tree

We can change the order of the two features to obtain a new decision tree.

If the dimension of the input data is large, the number of associated decision trees is very large!

 

We use the greedy algorithm to obtain the decision tree model.

The first step is to choose the root: the feature that gives the largest information gain.
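The greedy step can be sketched as: score every feature by its information gain and take the best one as the root. A minimal sketch under that assumption (data and names are illustrative):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H(S) = -sum_k p_k * log2(p_k) over the class frequencies."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_root(X, y):
    """Greedily pick the feature index whose split maximizes information gain."""
    def gain(col):
        groups = {}
        for row, label in zip(X, y):
            groups.setdefault(row[col], []).append(label)
        remainder = sum(len(g) / len(y) * entropy(g) for g in groups.values())
        return entropy(y) - remainder
    return max(range(len(X[0])), key=gain)

# Hypothetical data: feature 0 is noise, feature 1 determines the class.
X = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
y = [1, 0, 1, 0]
print(best_root(X, y))  # 1
```

The same scoring is then applied recursively to each child's subset of the data until the leaves are pure or another stopping criterion (to prevent overfitting) is met.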
