PM2.5 Detection with Naive Bayes

This article presents a Python implementation of a naive Bayes classifier, covering the two key steps of training and prediction. First, Bayesian estimation is used to compute the prior probabilities and the smoothed conditional probabilities. The trained model then classifies a new sample by comparing posterior probabilities across classes. Finally, the test stage reports the classifier's accuracy and prints some of the predictions.
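In symbols, with $N$ training samples, $K$ classes, $N_k$ samples in class $c_k$, and $S_j$ distinct values for feature $j$, the smoothed estimates that training() computes are

$$P(Y=c_k) = \frac{N_k + \lambda}{N + K\lambda}, \qquad P(X^{(j)} = a_{jl} \mid Y = c_k) = \frac{\sum_{i=1}^{N} I(x_i^{(j)} = a_{jl},\ y_i = c_k) + \lambda}{N_k + S_j\lambda}$$

With $\lambda = 1$ (the default lamb) this is Laplace smoothing; $\lambda = 0$ recovers the maximum-likelihood estimates.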
import numpy as np
from collections import Counter
 
 
class NaiveBayes:
    def __init__(self, lamb=1):
        self.lamb = lamb  # smoothing parameter for Bayesian estimation (lamb=1 is Laplace smoothing)
        self.prior = dict()  # prior probability of each class
        self.conditional = dict()  # conditional probability of each feature value given a class
 
    def training(self, features, target):
        """
        根据朴素贝叶斯算法原理,使用 贝叶斯估计 计算先验概率和条件概率
        特征集集为离散型数据,预测类别为多元.  数据集格式为np.array
        :param features: 特征集m*n,m为样本数,n为特征数
        :param target: 标签集m*1
        :return: 不返回任何值,更新成员变量
        """
        features = np.array(features)
        target = np.array(target).reshape(features.shape[0], 1)
        m, n = features.shape
        labels = Counter(target.flatten().tolist())  # count the samples in each class
        k = len(labels.keys())  # number of classes
        for label, amount in labels.items():
            self.prior[label] = (amount + self.lamb) / (m + k * self.lamb)  # smoothed prior probability
        for feature in range(n):  # iterate over the features
            self.conditional[feature] = {}
            values = np.unique(features[:, feature])
            for value in values:  # iterate over the distinct values of this feature
                self.conditional[feature][value] = {}
                for label, amount in labels.items():  # iterate over the classes
                    feature_label = features[target[:, 0] == label, :]  # rows belonging to this class
                    c_label = Counter(feature_label[:, feature].flatten().tolist())  # value counts within the class
                    self.conditional[feature][value][label] = (c_label.get(value, 0) + self.lamb) / \
                                                              (amount + len(values) * self.lamb)  # smoothed conditional probability
        return
 
    def predict(self, features):
        """预测单个样本"""
        best_poster, best_label = -np.inf, -1
        for label in self.prior:
            poster = np.log(self.prior[label])  # 初始化后验概率为先验概率,同时把连乘换成取对数相加,防止下溢(即太多小于1的数相乘,结果会变成0)
            for feature in range(features.shape[0]):
                poster += np.log(self.conditional[feature][features[feature]][label])
            if poster > best_poster:  # 获取后验概率最大的类别
                best_poster = poster
                best_label = label
        return best_label
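A quick aside on the log trick in predict(): multiplying many probabilities below 1 underflows float64 to exactly 0, while summing their logarithms stays finite and preserves the ordering between classes. A minimal sketch illustrating the point (the numbers are arbitrary):

import numpy as np

p = np.full(500, 0.01)   # 500 hypothetical conditional probabilities
print(np.prod(p))        # 0.0 -- the product underflows float64
print(np.log(p).sum())   # about -2302.6, still usable for comparing classes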
 
 
def test():
    # 30 samples: six discrete weather readings per row, last column is the PM2.5 class (0/1/2)
    dataset = np.asarray([[-16, -4, 1020, 1.79, 0, 0, 1],
                          [-15, -4, 1020, 2.68, 0, 0, 2],
                          [-11, -5, 1021, 3.57, 0, 0, 2],
                          [-7, -5, 1022, 5.36, 1, 0, 2],
                          [-7, -5, 1022, 6.25, 2, 0, 1],
                          [-7, -6, 1022, 7.14, 3, 0, 1],
                          [-7, -6, 1023, 8.93, 4, 0, 1],
                          [-7, -5, 1024, 10.72, 0, 0, 1],
                          [-8, -6, 1024, 12.51, 0, 0, 1],
                          [-7, -5, 1025, 14.3, 0, 0, 1],
                          [-7, -5, 1026, 17.43, 1, 0, 2],
                          [-8, -5, 1026, 20.56, 0, 0, 2],
                          [-8, -5, 1026, 23.69, 0, 0, 2],
                          [-8, -5, 1025, 27.71, 0, 0, 2],
                          [-9, -5, 1025, 31.73, 0, 0, 2],
                          [-9, -5, 1025, 35.75, 0, 0, 2],
                          [-9, -5, 1026, 37.54, 0, 0, 2],
                          [-8, -5, 1027, 39.33, 0, 0, 2],
                          [-8, -5, 1027, 42.46, 0, 0, 2],
                          [-8, -5, 1028, 44.25, 0, 0, 2],
                          [-7, -5, 1028, 46.04, 0, 0, 2],
                          [-7, -5, 1027, 49.17, 1, 0, 2],
                          [-8, -6, 1028, 52.30, 2, 0, 1],
                          [-8, -6, 1027, 55.43, 3, 0, 1],
                          [-7, -6, 1027, 58.56, 4, 0, 1],
                          [-8, -6, 1026, 61.69, 5, 0, 0],
                          [-8, -7, 1026, 65.71, 6, 0, 0],
                          [-8, -7, 1025, 68.84, 7, 0, 0],
                          [-8, -7, 1024, 72.86, 8, 0, 0],
                          [-9, -8, 1024, 76.88, 9, 0, 1]])
    print(dataset)
    np.random.shuffle(dataset)  # shuffle the data
    features = dataset[:, :-1]
    target = dataset[:, -1:]
    nb = NaiveBayes()
    nb.training(features, target)
    prediction = []
    for sample in features:  # re-predict on the training samples
        prediction.append(nb.predict(sample))
    correct = [1 if a == b else 0 for a, b in zip(prediction, target.flatten())]
    print(correct.count(1) / len(correct))  # training accuracy
    for a, b in zip(prediction, target.flatten()):
        print("prediction:", a, "label:", b)
 
test()
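To reuse the classifier outside test(), a minimal standalone sketch looks like this. The toy features and labels below are hypothetical, not the PM2.5 data; note that predict() indexes self.conditional directly, so every feature value of the query must have appeared during training, otherwise a KeyError is raised.

# Hypothetical toy data: two binary features, two classes
X = np.array([[0, 1], [0, 0], [1, 1], [1, 0]])
y = np.array([0, 0, 1, 1])
clf = NaiveBayes(lamb=1)
clf.training(X, y)
print(clf.predict(np.array([0, 1])))  # prints 0 -- the value 0 for feature 0 was only seen in class 0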

