General workflow
1. Collect data: any method works; this chapter uses RSS feeds.
2. Prepare data: numeric or Boolean values are required.
3. Analyze data: with many features, plotting them individually tells you little; histograms work better.
4. Train the algorithm: compute the conditional probability of each independent feature.
5. Test the algorithm: compute the error rate.
6. Use the algorithm: document classification is the most common application of naive Bayes, but the classifier works in any classification setting, not just text.
Pseudocode
Count the number of documents in each class
For every training document:
    For each class:
        If a token appears in the document -> increment that token's count
        Increment the total token count
For each class:
    For each token:
        Divide the token's count by the total token count to get the conditional probability
Return the conditional probability vector for each class
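These counts implement the training step of naive Bayes: for each class c we estimate the per-token conditionals p(w0|c), p(w1|c), ..., and the naive independence assumption lets the document likelihood factor into their product, p(w|c) = p(w0|c)p(w1|c)...p(wn|c). Classification then applies Bayes' rule, p(c|w) = p(w|c)p(c)/p(w), and picks the class with the larger value.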
Implementation
from numpy import zeros, ones, log   # ones and log are used by the fixes further below

def loadDataSet():
    # Toy data set: six message-board posts and their labels
    postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                   ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                   ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                   ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                   ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                   ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0, 1, 0, 1, 0, 1]   # 1 = abusive, 0 = not abusive
    return postingList, classVec

def createVocabList(dataSet):
    # Build the vocabulary as the union of all tokens in the corpus
    vocabSet = set([])
    for document in dataSet:
        vocabSet = vocabSet | set(document)
    return list(vocabSet)

def setOfWords2Vec(vocabList, inputSet):
    # Set-of-words model: 1 if a vocabulary word appears in the document, else 0
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
        else:
            print("the word %s is not in my Vocabulary!" % word)
    return returnVec

def trainNB0(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    # Prior probability that a document is abusive
    pAbusive = sum(trainCategory) / float(numTrainDocs)
    p0Num = zeros(numWords); p1Num = zeros(numWords)
    p0Denom = 0.0; p1Denom = 0.0
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    # Per-word conditional probabilities for each class
    p1Vect = p1Num / p1Denom
    p0Vect = p0Num / p0Denom
    return p0Vect, p1Vect, pAbusive
# Driver script in a separate file, with the code above saved as bayes.py
import bayes

# listOPosts, listClasses = bayes.loadDataSet()
# myVocabList = bayes.createVocabList(listOPosts)
# print(myVocabList)
# print(bayes.setOfWords2Vec(myVocabList, listOPosts[0]))
# print(bayes.setOfWords2Vec(myVocabList, listOPosts[3]))

listOPosts, listClasses = bayes.loadDataSet()
myVocabList = bayes.createVocabList(listOPosts)
trainMat = []
for postinDoc in listOPosts:
    trainMat.append(bayes.setOfWords2Vec(myVocabList, postinDoc))
p0V, p1V, pAb = bayes.trainNB0(trainMat, listClasses)
print(pAb, "\n")
print(p0V)
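With the toy data above, pAb prints as 0.5: three of the six training posts are labeled abusive, so the prior is 3/6.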
Testing the algorithm
Modify the classifier to cope with real-world conditions.
1. The zero-probability problem
When classifying a document with the Bayes classifier, we multiply many per-word conditional probabilities together to get the probability that the document belongs to a class, i.e. we compute p(w0|1)p(w1|1)p(w2|1)... If any one of those probabilities is 0, the whole product is 0.
Solution: to lessen this effect, initialize every word count to 1 and both denominators to 2 (add-one, or Laplace, smoothing).
Modify the following lines in bayes.py:
# p0Num = zeros(numWords); p1Num = zeros(numWords)
# p0Denom = 0.0; p1Denom = 0.0
p0Num = ones(numWords); p1Num = ones(numWords)
p0Denom = 2.0; p1Denom = 2.0
2. The underflow problem
When computing the product p(w0|1)p(w1|1)p(w2|1)...p(wn|1), most of the factors are very small, so the product underflows or comes out numerically wrong.
Solution: take the natural logarithm of the product. Since ln(a*b) = ln(a) + ln(b), the product becomes a sum, which avoids underflow and floating-point rounding errors. The natural log is monotonically increasing, so taking logs never changes which class ends up with the larger score.
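A quick standalone illustration of the underflow (the probability values here are made up for demonstration):

probs = [1e-5] * 100              # 100 small per-word probabilities
product = 1.0
for p in probs:
    product *= p
print(product)                    # 0.0 -- the true value 1e-500 underflows

import math
logSum = sum(math.log(p) for p in probs)
print(logSum)                     # about -1151.3, easily representable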
Modify the following lines in bayes.py:
# p1Vect = p1Num / p1Denom
# p0Vect = p0Num / p0Denom
p1Vect = log(p1Num / p1Denom)
p0Vect = log(p0Num / p0Denom)
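The spam-filtering example below imports classifyNB from bayes, but that function is not listed in this section. A minimal sketch, assuming the log-probability vectors returned by the modified trainNB0 above (the name and signature follow the import in the example):

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    # With log probabilities the product over words becomes a sum;
    # add each class's log prior and pick the larger score.
    p1 = sum(vec2Classify * p1Vec) + log(pClass1)
    p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
    if p1 > p0:
        return 1
    else:
        return 0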
Example 1: spam email filtering
import re
import random
from numpy import array
from bayes import createVocabList, setOfWords2Vec, trainNB0, classifyNB

def textParse(bigString):
    # Split on non-word characters and drop short tokens
    listOfTokens = re.split(r'\W+', bigString)
    return [tok.lower() for tok in listOfTokens if len(tok) > 2]

def spamTest():
    docList = []; classList = []; fullText = []
    # Load 25 spam and 25 ham emails
    for i in range(1, 26):
        wordList = textParse(open(r'E:\Python\machinelearninginaction\Ch04\email\spam/%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(1)
        wordList = textParse(open(r'E:\Python\machinelearninginaction\Ch04\email\ham/%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(0)
    vocabList = createVocabList(docList)
    # Hold out 10 randomly chosen emails as the test set
    trainingSet = list(range(50)); testSet = []
    for i in range(10):
        randIndex = int(random.uniform(0, len(trainingSet)))
        testSet.append(trainingSet[randIndex])
        del trainingSet[randIndex]
    trainMat = []; trainClasses = []
    for docIndex in trainingSet:
        trainMat.append(setOfWords2Vec(vocabList, docList[docIndex]))
        trainClasses.append(classList[docIndex])
    p0V, p1V, pSpam = trainNB0(array(trainMat), array(trainClasses))
    errorCount = 0
    for docIndex in testSet:
        wordVector = setOfWords2Vec(vocabList, docList[docIndex])
        if classifyNB(array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:
            errorCount += 1
            print("classification error", docList[docIndex])
    print('the error rate is: ', float(errorCount) / len(testSet))
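To run the test (the email paths above must exist on your machine):

if __name__ == '__main__':
    spamTest()

Because the 10 test emails are chosen at random, the reported error rate varies from run to run; averaging the error rate over repeated runs gives a more reliable estimate.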