After studying Bayes' theorem, I implemented the Gaussian Naive Bayes algorithm from scratch on top of NumPy and pandas instead of using scikit-learn's ready-made package. You can paste the code blocks below in order and run them yourself.
Import the required packages:

```python
import time    # used to report the algorithm's running time
import math
import numpy as np
import pandas as pd

# keep NumPy arrays from printing in scientific notation
np.set_printoptions(suppress=True)
```
The main Naive Bayes code:

```python
class Gaussian(object):
    """
    Implements a Gaussian Naive Bayes classifier: a supervised learning
    method that applies Bayes' theorem with a strong (naive) assumption
    of conditional independence between features.

    fit: fit the Gaussian Naive Bayes model.
        Parameters
        ----------
        X_train : training dataset (pandas DataFrame)
        y_train : labels of X_train

    predict: return the predicted classes of X_test.
        Parameters
        ----------
        X_test : test dataset (pandas DataFrame)
    """
    def __init__(self):
        pass

    def fit(self, X_train, y_train):
        self._data_with_label = X_train.copy()
        self._y_train = pd.DataFrame(y_train.values, columns=['label'])
        self._data_with_label['label'] = y_train.values
        # per-class mean and variance of every feature
        self.mean_mat = self._data_with_label.groupby("label").mean()
        self.var_mat = self._data_with_label.groupby("label").var()
        self.prior_rate = self.__Priori()
        return self

    def predict(self, X_test):
        # likelihood of each test row under each class, times the class prior
        pred = [self.__Condition_formula(self.mean_mat, self.var_mat, row) * self.prior_rate
                for row in X_test.values]
        # pick the class with the largest posterior
        class_result = np.argmax(pred, axis=1)
        return class_result

    # prior probability of each class
    def __Priori(self):
        la = self._y_train['label'].value_counts().sort_index()
        prior_rate = np.array([i / sum(la) for i in la])
        return prior_rate

    # Gaussian class-conditional density, multiplied across features
    def __Condition_formula(self, mu, sigma2, row):
        P_mat = 1 / np.sqrt(2 * math.pi * sigma2) * np.exp(-(row - mu) ** 2 / (2 * sigma2))
        P_mat = pd.DataFrame(P_mat).prod(axis=1)
        return P_mat
```
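As a quick sanity check, the density expression used in `__Condition_formula` can be compared against `scipy.stats.norm.pdf`; note that scipy is parameterized by the standard deviation, not the variance. A minimal sketch with made-up values:

```python
import numpy as np
from scipy.stats import norm

# hypothetical toy values, just to check the hand-written density
mu, sigma2, x = 1.0, 4.0, 2.5

# same expression as in __Condition_formula
manual = 1 / np.sqrt(2 * np.pi * sigma2) * np.exp(-(x - mu) ** 2 / (2 * sigma2))

# scipy takes the standard deviation, so pass the square root of the variance
reference = norm.pdf(x, loc=mu, scale=np.sqrt(sigma2))

print(np.isclose(manual, reference))  # the two densities should agree
```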
Drawback: the implementation relies on Python-level loops, so its performance is not well optimized.
Let's test it on a dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
iris.target = pd.DataFrame(iris.target)
iris.data = pd.DataFrame(iris.data)
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target,
                                                    test_size=0.4, random_state=1)
```
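For reference, `test_size=0.4` holds out 40% of the rows as the test set. A tiny sketch with hypothetical toy arrays:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy data: 10 samples, 2 features (made up for illustration)
X = np.arange(20).reshape(10, 2)
y = np.arange(10) % 2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=1)
print(len(X_tr), len(X_te))  # 6 4
```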
Now test the hand-written Gaussian Naive Bayes implementation:
```python
from sklearn.metrics import accuracy_score

start_time = time.time()
NB = Gaussian()
NB.fit(X_train, y_train)
y_train_NB = NB.predict(X_train)
y_test_NB = NB.predict(X_test)
print("Use custom Gaussian Naive Bayes algorithm",
      "\naccuracy on train set: ", accuracy_score(y_train, y_train_NB),
      "\naccuracy on test set: ", accuracy_score(y_test, y_test_NB))
print("--- %s seconds ---" % (time.time() - start_time))
```
Both accuracies come out above 90%.
Now compare with scikit-learn's built-in GaussianNB:
```python
from sklearn.naive_bayes import GaussianNB

start_time = time.time()
NB2 = GaussianNB()
NB2.fit(X_train, y_train.values.ravel())
y_train_NB2 = NB2.predict(X_train)
y_test_NB2 = NB2.predict(X_test)
print("Use sklearn Gaussian Naive Bayes algorithm",
      "\naccuracy on train set: ", accuracy_score(y_train, y_train_NB2),
      "\naccuracy on test set: ", accuracy_score(y_test, y_test_NB2))
print("--- %s seconds ---" % (time.time() - start_time))
```
The accuracy matches the hand-written Gaussian Naive Bayes exactly, but sklearn runs much faster. This is most likely because the custom implementation loops over rows in Python; replacing the loops with matrix operations should close the gap.
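One way to remove the loop is to broadcast all test rows against the per-class statistics at once and work with log-probabilities (which is also numerically safer than multiplying many small densities). A rough sketch, where `mean_mat`, `var_mat`, and `prior_rate` are made-up stand-ins for the attributes the fit step would produce:

```python
import numpy as np

# hypothetical fitted parameters: 3 classes, 4 features
mean_mat = np.array([[5.0, 3.4, 1.5, 0.2],
                     [5.9, 2.8, 4.3, 1.3],
                     [6.6, 3.0, 5.6, 2.0]])
var_mat = np.array([[0.12, 0.14, 0.03, 0.01],
                    [0.27, 0.10, 0.22, 0.04],
                    [0.40, 0.10, 0.30, 0.07]])
prior_rate = np.array([1/3, 1/3, 1/3])

X_test = np.array([[5.1, 3.5, 1.4, 0.2],
                   [6.7, 3.0, 5.2, 2.3]])

# broadcast (n_samples, 1, n_features) against (n_classes, n_features)
diff = X_test[:, None, :] - mean_mat
# log of the Gaussian density, per sample, class, and feature
log_lik = -0.5 * (np.log(2 * np.pi * var_mat) + diff ** 2 / var_mat)
# sum log-densities over features and add the log prior -> (n_samples, n_classes)
log_post = log_lik.sum(axis=2) + np.log(prior_rate)
pred = np.argmax(log_post, axis=1)
print(pred)  # [0 2]
```

The whole prediction is now a handful of array operations, with no per-row Python loop.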
Finally, use a confusion matrix to look at the classification results on the test set:
```python
import matplotlib.pyplot as plt
import seaborn as sns

con_matrix = pd.crosstab(pd.Series(y_test.values.flatten(), name='Actual'),
                         pd.Series(y_test_NB, name='Predicted'))
plt.title("Test set Confusion Matrix on Gaussian Naive Bayes")
sns.heatmap(con_matrix, cmap="Blues", annot=True, fmt='g')
plt.show()
```
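If only the counts are needed, `sklearn.metrics.confusion_matrix` produces the same table without any plotting. A small sketch with made-up labels:

```python
from sklearn.metrics import confusion_matrix

# hypothetical labels: a 3-class problem with one mistake
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]

cm = confusion_matrix(y_true, y_pred)
print(cm)
# rows are actual classes, columns are predicted classes;
# the off-diagonal 1 is the single class-1 sample predicted as class 2
```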
The main algorithm code is on GitHub; feel free to follow the repo if you're interested:
https://github.com/JuneYaooo/ml-algorithms