Gaussian Naive Bayes on the spambase dataset with 5-fold cross-validation (Python implementation included)

Code

```python
import math
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

# Read the file; the data has no header row, so header=None is required
spambase = pd.read_csv(r"C:\Users\hina\Desktop\spambase\spambase.data", header=None)
spambasedata = np.array(spambase)
# Shuffle the rows: the first part of the dataset is all label 1 and the rest
# all label 0, so without shuffling a test fold would contain only label 0
np.random.shuffle(spambasedata)
indices = np.arange(spambasedata.shape[0])

# Build the 5-fold cross-validation splits
kf = KFold(n_splits=5, shuffle=False)
data = []
for train_index, test_index in kf.split(indices):  # split() yields index arrays
    data.append((spambasedata[train_index], spambasedata[test_index]))

n = 0
for train, test in data:  # each row is [feature1, feature2, ..., label]
    n += 1
    print(f"Fold {n}")
    # Split the training set by label
    train_label0 = train[train[:, -1] == 0]
    p0 = train_label0.shape[0] / train.shape[0]  # P(label=0)
    train_label1 = train[train[:, -1] == 1]
    p1 = train_label1.shape[0] / train.shape[0]  # P(label=1)
    print("Class priors for labels 0 and 1 in the training set:", p0, p1)

    # Per-column mean and standard deviation for each class
    mean0 = np.mean(train_label0[:, :-1], axis=0)
    std0 = np.std(train_label0[:, :-1], axis=0) + 1e-10
    mean1 = np.mean(train_label1[:, :-1], axis=0)
    std1 = np.std(train_label1[:, :-1], axis=0) + 1e-10  # small smoothing value so the variance is never 0

    # Gaussian probability density for a single feature value
    def gaussian_probability(x, mean, stdev):
        exponent = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
        return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

    # Predict every row of the test fold
    pred = []
    for row in test:  # [feature1, feature2, ..., label]
        p0_x = 1
        p1_x = 1
        for i in range(len(row) - 1):
            p0_x *= gaussian_probability(row[i], mean0[i], std0[i])
            p1_x *= gaussian_probability(row[i], mean1[i], std1[i])
        if p0_x * p0 > p1_x * p1:
            pred.append(0)
        else:
            pred.append(1)
    print("First 20 predictions vs. true labels:", "\n", pred[:20], "\n", test[:, -1][:20])
    print("Accuracy:", (np.array(pred) == test[:, -1]).sum() / test.shape[0])
```

Theory

https://blog.csdn.net/u013066730/article/details/125821190

Probability of a single feature: each feature $x_i$ given a class $y$ is modeled as a Gaussian with the per-class mean $\mu_{y,i}$ and standard deviation $\sigma_{y,i}$:

$$P(x_i \mid y) = \frac{1}{\sqrt{2\pi}\,\sigma_{y,i}} \exp\left(-\frac{(x_i - \mu_{y,i})^2}{2\sigma_{y,i}^2}\right)$$
Corresponding code:

```python
def gaussian_probability(x, mean, stdev):
    exponent = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent
```
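
The same density can also be evaluated for a whole feature vector at once with NumPy broadcasting (`scipy.stats.norm.pdf` does the equivalent); a minimal vectorized sketch, equivalent to the scalar function above:

```python
import numpy as np

def gaussian_probability_vec(x, mean, std):
    # Elementwise Gaussian density for 1-D arrays of feature values,
    # means, and standard deviations (NumPy broadcasting)
    return np.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (np.sqrt(2 * np.pi) * std)
```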

Multiply the probabilities of all the features together and multiply by the prior probability of the class; since the evidence $P(x)$ is the same for every class, this score is proportional to the posterior probability of that class:

$$P(y \mid x) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y)$$

```python
p0_x = 1
p1_x = 1
for i in range(len(row) - 1):
    p0_x *= gaussian_probability(row[i], mean0[i], std0[i])
    p1_x *= gaussian_probability(row[i], mean1[i], std1[i])
```
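
With 57 features, this product of many small densities can underflow to 0.0 in double precision. A common remedy, not used in the original code, is to score each class in log space, adding log-densities and the log-prior instead of multiplying. A sketch reusing `mean0`, `std0`, `p0` and friends from above:

```python
import numpy as np

def log_gaussian(x, mean, std):
    # log of the Gaussian density, computed directly in log space
    return -0.5 * np.log(2 * np.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

# Per-class score = log-prior + sum of per-feature log-likelihoods
log_score0 = np.log(p0) + log_gaussian(row[:-1], mean0, std0).sum()
log_score1 = np.log(p1) + log_gaussian(row[:-1], mean1, std1).sum()
pred.append(0 if log_score0 > log_score1 else 1)
```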

The predicted label is the class whose score is largest.

When computing the standard deviation, be sure to add a small smoothing value; otherwise a zero variance causes a division-by-zero error.

```python
# Per-column mean and standard deviation for each class
mean0 = np.mean(train_label0[:, :-1], axis=0)
std0 = np.std(train_label0[:, :-1], axis=0) + 1e-10
mean1 = np.mean(train_label1[:, :-1], axis=0)
std1 = np.std(train_label1[:, :-1], axis=0) + 1e-10  # small smoothing value so the variance is never 0
```
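
For comparison, scikit-learn's GaussianNB handles the same problem through its var_smoothing parameter, which adds a fraction of the largest feature variance (default 1e-9) to every per-class variance:

```python
from sklearn.naive_bayes import GaussianNB

# var_smoothing * max(feature variances) is added to all variances
clf = GaussianNB(var_smoothing=1e-9)
```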

Code output

```
Fold 1
Class priors for labels 0 and 1 in the training set: 0.6021739130434782 0.3978260869565217
First 20 predictions vs. true labels: 
 [0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1] 
 [0. 0. 0. 1. 1. 1. 1. 0. 1. 0. 1. 1. 0. 0. 0. 0. 0. 1. 1. 1.]
Accuracy: 0.8241042345276873
Fold 2
Class priors for labels 0 and 1 in the training set: 0.6085302906818799 0.3914697093181201
First 20 predictions vs. true labels: 
 [1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0] 
 [1. 0. 1. 0. 0. 1. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0.]
Accuracy: 0.8130434782608695
Fold 3
Class priors for labels 0 and 1 in the training set: 0.6047269763651182 0.3952730236348818
First 20 predictions vs. true labels: 
 [1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0] 
 [1. 1. 1. 1. 0. 1. 0. 0. 0. 1. 0. 1. 1. 1. 0. 0. 0. 0. 0. 0.]
Accuracy: 0.8108695652173913
Fold 4
Class priors for labels 0 and 1 in the training set: 0.6052703069817984 0.3947296930182016
First 20 predictions vs. true labels: 
 [1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0] 
 [0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 1. 0. 1. 1. 1. 0.]
Accuracy: 0.8141304347826087
Fold 5
Class priors for labels 0 and 1 in the training set: 0.6090736212985601 0.3909263787014398
First 20 predictions vs. true labels: 
 [1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1] 
 [1. 0. 1. 0. 0. 0. 1. 1. 1. 1. 0. 0. 0. 0. 1. 0. 0. 0. 1. 1.]
Accuracy: 0.8076086956521739
```

(Predicting the majority class, label 0, for every sample already gets about 0.6 accuracy, while naive Bayes only reaches about 0.8, so there is still a case for deep learning.)
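
As a sanity check (not part of the original post), the same experiment with scikit-learn's GaussianNB and 5-fold cross-validation should land near the ~0.81 accuracies above:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = spambasedata[:, :-1], spambasedata[:, -1]
scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(scores, scores.mean())
```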
