Computing Information Gain: a Python Implementation



from math import log

def I(*args):
    # Entropy (expected information) of a class distribution given as counts.
    total = float(sum(args))
    result = 0.0
    for i in args:
        if i == 0:
            continue  # a zero count contributes nothing
        result += i / total * log(i / total, 2)
    return -result

# num is the number of classes; args lists the class counts for each
# partition, num values per partition.
def E(num, *args):
    if len(args) % num != 0:
        raise ValueError("len(args) must be a multiple of num")
    result = 0.0
    total = sum(args)
    for x in range(len(args) // num):
        k = x * num
        total_up = float(sum(args[k:k + num]))
        result += total_up / total * I(*args[k:k + num])
    return result

def Gain(i, e):
    return i - e

# short aliases
i = I
e = E
g = Gain

if __name__ == "__main__":
    # for example
    print(i(9, 5))
    print(e(2, 2, 3, 4, 0, 3, 2))
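The counts in the demo appear to match the well-known 14-sample play-tennis example (9 positive, 5 negative, split three ways as 2/3, 4/0, 3/2 on one attribute). As a sanity check, here is a minimal self-contained version of the same computation, with the helper names chosen here for illustration:

```python
from math import log2

def entropy(counts):
    """Entropy of a class distribution given as counts, in bits."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def expected_entropy(total, partitions):
    """Weighted entropy after splitting into partitions of class counts."""
    return sum(sum(p) / total * entropy(p) for p in partitions)

before = entropy([9, 5])                                # same as I(9, 5)
after = expected_entropy(14, [[2, 3], [4, 0], [3, 2]])  # same as E(2, 2,3,4,0,3,2)
gain = before - after

print(round(before, 3))  # → 0.94
print(round(after, 3))   # → 0.694
print(round(gain, 3))    # → 0.247
```

Textbooks often quote the gain as 0.246 because they subtract the already-rounded intermediates (0.940 − 0.694); the unrounded value is ≈ 0.2467.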




ID3 (Iterative Dichotomiser 3) is a decision tree algorithm used for classification and prediction. The basic steps of a Python implementation of ID3 are as follows:

1. Import the necessary libraries and modules:

```python
import pandas as pd
from math import log2
```

2. Define a function to compute the information entropy:

```python
def entropy(data):
    labels = data.iloc[:, -1]
    unique_labels = labels.unique()
    n = len(labels)
    entropy = 0
    for label in unique_labels:
        p = len(labels[labels == label]) / n
        entropy += -p * log2(p)
    return entropy
```

3. Define a function to compute the information gain:

```python
def information_gain(data, feature):
    entropy_before = entropy(data)
    unique_values = data[feature].unique()
    n = len(data)
    entropy_after = 0
    for value in unique_values:
        subset = data[data[feature] == value]
        p = len(subset) / n
        entropy_after += p * entropy(subset)
    return entropy_before - entropy_after
```

4. Define a function to choose the best feature:

```python
def choose_best_feature(data):
    features = data.columns[:-1]
    best_feature = None
    best_information_gain = -1
    for feature in features:
        ig = information_gain(data, feature)
        if ig > best_information_gain:
            best_feature = feature
            best_information_gain = ig
    return best_feature
```

5. Define a recursive function to build the decision tree:

```python
def build_tree(data):
    labels = data.iloc[:, -1]
    if len(labels.unique()) == 1:
        return labels.iloc[0]
    if len(data.columns) == 1:
        return labels.mode()[0]
    best_feature = choose_best_feature(data)
    tree = {best_feature: {}}
    unique_values = data[best_feature].unique()
    for value in unique_values:
        subset = data[data[best_feature] == value].drop(best_feature, axis=1)
        subtree = build_tree(subset)
        tree[best_feature][value] = subtree
    return tree
```

6. Finally, load the data and generate the decision tree:

```python
data = pd.read_csv('data.csv')
tree = build_tree(data)
print(tree)
```

These are the basic steps of implementing ID3 in Python. This is only a simple implementation, and it can be optimized further for real use.
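The same recursion can also be sketched with only the standard library, which makes it easy to exercise on a few inline rows. This is a stdlib-only sketch of the same ID3 steps, not the pandas version above; the toy weather rows are made up for illustration:

```python
from math import log2
from collections import Counter

def entropy(rows, target):
    """Entropy of the target column over a list of dict rows."""
    counts = Counter(r[target] for r in rows)
    n = len(rows)
    return -sum(c / n * log2(c / n) for c in counts.values())

def information_gain(rows, feature, target):
    """Entropy reduction from partitioning rows on feature."""
    n = len(rows)
    after = 0.0
    for value in {r[feature] for r in rows}:
        subset = [r for r in rows if r[feature] == value]
        after += len(subset) / n * entropy(subset, target)
    return entropy(rows, target) - after

def build_tree(rows, features, target):
    """Recursive ID3: pick the highest-gain feature, split, recurse."""
    labels = [r[target] for r in rows]
    if len(set(labels)) == 1:          # pure node: return the class
        return labels[0]
    if not features:                   # no features left: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(features, key=lambda f: information_gain(rows, f, target))
    tree = {best: {}}
    rest = [f for f in features if f != best]
    for value in {r[best] for r in rows}:
        subset = [r for r in rows if r[best] == value]
        tree[best][value] = build_tree(subset, rest, target)
    return tree

# Hypothetical toy data: whether to play tennis given outlook and wind.
rows = [
    {"outlook": "sunny", "wind": "weak", "play": "no"},
    {"outlook": "sunny", "wind": "strong", "play": "no"},
    {"outlook": "overcast", "wind": "weak", "play": "yes"},
    {"outlook": "rain", "wind": "weak", "play": "yes"},
    {"outlook": "rain", "wind": "strong", "play": "no"},
]
tree = build_tree(rows, ["outlook", "wind"], "play")
# The root splits on "outlook"; rainy days split further on "wind".
print(tree)
```

On this data "outlook" has the higher gain at the root (its "sunny" and "overcast" branches are pure), so only the "rain" branch needs a second split.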