1. Basic Concepts
The naive Bayes algorithm is grounded in probability theory and mathematical statistics:
it predicts a class by weighing how strongly each feature influences the classification, producing different results accordingly.
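The core of the algorithm is Bayes' rule, P(class | feature) = P(feature | class) · P(class) / P(feature). A minimal sketch with made-up numbers (the probabilities below are purely illustrative, not from any dataset):

```python
# Bayes' rule with illustrative numbers: how likely is rain, given that it is cloudy?
p_rain = 0.3                  # prior P(rain), assumed for illustration
p_clouds_given_rain = 0.9     # likelihood P(clouds | rain), assumed
p_clouds_given_dry = 0.2      # likelihood P(clouds | no rain), assumed

# Total probability of clouds, marginalizing over rain / no rain
p_clouds = p_clouds_given_rain * p_rain + p_clouds_given_dry * (1 - p_rain)

# Posterior: Bayes' rule
p_rain_given_clouds = p_clouds_given_rain * p_rain / p_clouds
print(round(p_rain_given_clouds, 3))  # 0.659
```

Observing clouds raises the probability of rain from the 0.3 prior to about 0.66; naive Bayes applies this same update once per feature, assuming the features are conditionally independent.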
2. A Simple Example
import numpy as np
X = np.array([[0, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]])
y = np.array([0, 1, 1, 0, 1, 0, 0])
counts = {}
for label in np.unique(y):
    counts[label] = X[y == label].sum(axis=0)
print("feature count:\n{}".format(counts))
The output is:
feature count:
{0: array([1, 2, 0, 4]), 1: array([1, 3, 3, 0])}
y has two classes, 0 and 1. For each class, we sum how many samples have each feature of X equal to 1;
these counts indicate how much each feature contributes when y is 0 and when y is 1.
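These counts can be turned into per-class feature probabilities, which is essentially what BernoulliNB estimates internally. A sketch assuming Laplace (add-one) smoothing, which matches BernoulliNB's default `alpha=1.0`:

```python
import numpy as np

X = np.array([[0, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 1],
              [0, 1, 1, 0], [0, 1, 0, 1], [1, 0, 0, 1]])
y = np.array([0, 1, 1, 0, 1, 0, 0])

for label in np.unique(y):
    X_label = X[y == label]
    # Probability of each feature being 1 within this class,
    # with add-one smoothing: (count + 1) / (n_samples_in_class + 2)
    prob = (X_label.sum(axis=0) + 1) / (X_label.shape[0] + 2)
    print(label, prob)
```

For class 1 this yields [0.4, 0.8, 0.8, 0.2]: features 2 and 3 (counts 3 out of 3 samples) are strong indicators of class 1, while feature 4 argues against it.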
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB

clf = BernoulliNB()
clf.fit(X, y)
Next_Day = [[0, 0, 1, 0]]
pre = clf.predict(Next_Day)
if pre == [1]:
    print('It will rain')
else:
    print('Relax, it will be sunny')
print('Probability of rain: {}'.format(clf.predict_proba(Next_Day)[0][1]))
print('Probability of no rain: {}\n'.format(clf.predict_proba(Next_Day)[0][0]))
If we interpret y = 0 as sunny and y = 1 as rainy, and the features of X as factors that influence the weather,
then the naive Bayes algorithm gives us the probability of rain.
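Note that `predict` and `predict_proba` are consistent: the predicted class is simply the one with the highest posterior probability, and the probabilities over all classes sum to 1. A quick check using the same data as above:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

X = np.array([[0, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 1],
              [0, 1, 1, 0], [0, 1, 0, 1], [1, 0, 0, 1]])
y = np.array([0, 1, 1, 0, 1, 0, 0])

clf = BernoulliNB().fit(X, y)
proba = clf.predict_proba([[0, 0, 1, 0]])[0]

# predict() returns the class with the largest posterior probability
assert clf.predict([[0, 0, 1, 0]])[0] == clf.classes_[np.argmax(proba)]
# the class probabilities form a distribution
print(proba.sum())  # 1.0
```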
3. A Real-World Example
## Benign vs. malignant tumors
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()
print(cancer.keys())
X, y = cancer['data'], cancer['target']
gnb = GaussianNB()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=8)
gnb.fit(X_train, y_train)
print('Score: {:.3f}'.format(gnb.score(X_test, y_test)))
pre = gnb.predict([X[123]])
print('Predicted class: {}'.format(cancer['target_names'][pre][0]))
print('Actual class: {}'.format(cancer['target_names'][y[123]]))
p = gnb.predict_proba([X[123]])[0][1] * 100
print('Probability of benign tumor: {}%'.format(p))
Using the built-in load_breast_cancer dataset, we predict whether a patient's tumor is malignant or benign.
The dataset is provided as a Bunch object.
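A Bunch behaves like a dictionary but also exposes its keys as attributes, so `cancer['data']` and `cancer.data` are interchangeable. A quick look at its contents:

```python
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()

# dict-style and attribute-style access return the same arrays
print(cancer['target_names'])   # ['malignant' 'benign']
print(cancer.target_names)      # same object, attribute access
print(cancer.data.shape)        # (569, 30): 569 samples, 30 features
print(cancer.DESCR[:50])        # the Bunch also carries a text description
```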