Naive Bayes --- Reimplementing the Algorithm

Naive Bayes classifier

Algorithm outline:

(1) Compute the prior and conditional probabilities:
P(Y=c_k)=\frac{\sum_{i=1}^{N} I(y_i=c_k)}{N},\quad k=1,2,\dots,K
P(X^{(j)}=a_{jl}\mid Y=c_k)=\frac{\sum_{i=1}^{N} I(x_i^{(j)}=a_{jl},\,y_i=c_k)}{\sum_{i=1}^{N} I(y_i=c_k)},\quad j=1,2,\dots,n;\ l=1,2,\dots,S_j;\ k=1,2,\dots,K
(2) For a given instance x, compute:
P(Y=c_k)\prod_{j=1}^{n} P(X^{(j)}=x^{(j)}\mid Y=c_k),\quad k=1,2,\dots,K
(3) Determine the class of x:
y=\arg\max_{c_k} P(Y=c_k)\prod_{j=1}^{n} P(X^{(j)}=x^{(j)}\mid Y=c_k)
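These three steps are the maximum-likelihood (count-based) form of naive Bayes for discrete features. The following is a minimal sketch of that counting scheme; the helper names fit_counts and predict_one are illustrative and are not part of the implementation further below:

from collections import Counter, defaultdict

def fit_counts(X, y):
    # step (1): prior P(Y=c_k) = count(y_i = c_k) / N
    N = len(y)
    prior = {c: cnt / N for c, cnt in Counter(y).items()}
    # step (1): conditional P(X^(j)=a_jl | Y=c_k) = count(x_i^(j)=a_jl, y_i=c_k) / count(y_i=c_k)
    cond = defaultdict(Counter)  # (class, feature index) -> counts of feature values
    class_count = Counter(y)
    for xi, yi in zip(X, y):
        for j, v in enumerate(xi):
            cond[(yi, j)][v] += 1
    return prior, cond, class_count

def predict_one(x, prior, cond, class_count):
    # steps (2)-(3): score each class and return the argmax
    best_label, best_score = None, -1.0
    for c, p in prior.items():
        score = p
        for j, v in enumerate(x):
            score *= cond[(c, j)][v] / class_count[c]
        if score > best_score:
            best_label, best_score = c, score
    return best_label

The iris features used below are continuous rather than categorical, so the implementation that follows replaces the counts with a Gaussian density per feature.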

Code:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import BernoulliNB, MultinomialNB  # Bernoulli and multinomial variants
from collections import Counter
import math

# data: the iris dataset
def create_data():
    iris = load_iris()  # load the iris dataset
    df = pd.DataFrame(iris.data, columns=iris.feature_names)  # 150x4 DataFrame; iris.data holds the feature values
    df['label'] = iris.target  # iris.target holds the class labels 0, 1, 2
    df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']
    data = np.array(df.iloc[:100, :])  # keep the first 100 rows (classes 0 and 1 only)
    return data[:, :-1], data[:, -1]  # features (all but the last column) and labels (the last column)
X, y = create_data()  # X: feature matrix, y: label vector
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)  # test_size=0.3: 70% train / 30% test split
# print(X_test[0], y_test[0])
# each feature's class-conditional likelihood is assumed to be Gaussian
class NaiveBayes:
    def __init__(self):
        self.model = None
    # mean
    def mean(self, X):
        return sum(X) / float(len(X))
    # standard deviation
    def stdev(self, X):
        return math.sqrt(sum([pow(x - self.mean(X), 2) for x in X]) / float(len(X)))
    # Gaussian probability density function
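    # p(x | mu, sigma) = 1 / (sqrt(2*pi) * sigma) * exp(-(x - mu)^2 / (2 * sigma^2))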
    def gaussian_probability(self,X,mean,stdev):
        exponent = math.exp(-(math.pow(X - mean, 2) / (2 * math.pow(stdev,2))))
        return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent
    # summarize the training data: one (mean, stdev) pair per feature
    def summarize(self, train_data):
        summarise = [(self.mean(i), self.stdev(i)) for i in zip(*train_data)]  # zip(*train_data) transposes the data so each i is one feature column
        return summarise
    # compute the per-feature mean and stdev separately for each class
    def fit(self, X, y):
        labels = list(set(y))  # the distinct class labels
        data = {label: [] for label in labels}  # dict mapping each label to its samples
        for f, label in zip(X, y):  # f is a feature vector, label its class
            data[label].append(f)  # group the samples by class
        self.model = {
            label: self.summarize(value)
            for label, value in data.items()  # (label, list of samples) pairs
        }  # per class: a list of (mean, stdev), one pair per feature
        return 'gaussianNB train done!'
    # compute the unnormalized posterior score for each class
    def calculate_probabilities(self,input_data):
        # summaries:{0.0: [(5.0, 0.37),(3.42, 0.40)], 1.0: [(5.8, 0.449),(2.7, 0.27)]}
        # input_data:[1.1, 2.2]
        probabilities = {}
        for label,value in self.model.items():
            probabilities[label] = 1
            for i in range(len(value)):
                mean,stdev = value[i]
                probabilities[label] *= self.gaussian_probability(input_data[i], mean, stdev)  # multiply the per-feature likelihoods (step (2))
        return probabilities

    # predict the class with the largest score
    def predict(self, X_test):
        # e.g. {0.0: 2.9680340789325763e-27, 1.0: 3.5749783019849535e-26}, i.e. label: score
        label = sorted(
            self.calculate_probabilities(X_test).items(),
            key=lambda x: x[-1])[-1][0]  # sort by score; [-1][0] is the label with the largest score
        return label
    # classification accuracy on the test set
    def score(self, X_test, y_test):
        right = 0
        for X, y in zip(X_test, y_test):  # X is a feature vector, y its true label
            label = self.predict(X)
            if label == y:
                right += 1
        return right / float(len(X_test))
if __name__ == '__main__':
    model = NaiveBayes()
    model.fit(X_train,y_train)
    print(model.predict([4.4, 3.2, 1.3, 0.2]))
    print(model.score(X_test,y_test))
    # scikit-learn equivalents for comparison (BernoulliNB and MultinomialNB expect binary/count features)
    clf = GaussianNB()
    clf.fit(X_train,y_train)
    print(clf.score(X_test,y_test))
    print(clf.predict([[4.4, 3.2, 1.3, 0.2]]))
    clf1 = BernoulliNB()
    clf2 = MultinomialNB()
    clf1.fit(X_train,y_train)
    clf2.fit(X_train, y_train)
    print(clf1.score(X_test, y_test))
    print(clf1.predict([[4.4, 3.2, 1.3, 0.2]]))
    print(clf2.score(X_test, y_test))
    print(clf2.predict([[4.4, 3.2, 1.3, 0.2]]))
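The scores returned by calculate_probabilities are the unnormalized numerators from step (2). If actual posterior probabilities are needed, they can be normalized by their sum; a minimal sketch, reusing the model object trained above (the posterior helper is illustrative, not part of the original class):

def posterior(nb, x):
    # divide each class score by the total so the values sum to 1
    scores = nb.calculate_probabilities(x)
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

print(posterior(model, [4.4, 3.2, 1.3, 0.2]))  # the class-0 entry should be close to 1 for this setosa-like sample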
