Naive Bayes

Basics

Bayesian probability is named after Thomas Bayes, an 18th-century theologian. Bayesian theory brings prior knowledge and logical inference to bear on uncertain propositions. Conditional probability is the foundation of Bayesian theory.

Bayes' formula:

p(ci|x) = p(x|ci)p(ci) / p(x)

In the Bayesian classifier, ci is class i and x is the feature vector. In a naive Bayes classifier, "naive" means the features are assumed to be conditionally independent. Expanding x into its n individual features, p(x|ci) can be written as p(x1,x2,x3,...,xn|ci) (the class-conditional likelihood; the quantity we want, p(ci|x), is the posterior). Under the independence assumption, p(x1,x2,x3,...,xn|ci) = p(x1|ci)p(x2|ci)...p(xn|ci). The denominator p(x) is the overall distribution of the data and is the same for every class, so only the numerators need to be compared.
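
A minimal sketch of how the independence assumption turns the joint likelihood into a product of per-feature factors (all probabilities below are invented for illustration):

# toy comparison of the two numerators p(x|ci)p(ci) for classes 0 and 1
# over three binary features; p(x) cancels, so only the products matter
p_c = {0: 0.6, 1: 0.4}              # priors p(ci), invented
p_x1_given_c = {                    # per-feature probabilities p(xj = 1 | ci), invented
    0: [0.2, 0.7, 0.5],
    1: [0.8, 0.3, 0.9],
}

x = [1, 0, 1]                       # observed feature vector

scores = {}
for c in (0, 1):
    score = p_c[c]
    for j, xj in enumerate(x):
        pj = p_x1_given_c[c][j]
        score *= pj if xj == 1 else (1 - pj)   # independence: multiply per-feature terms
    scores[c] = score               # proportional to p(ci|x)

print(max(scores, key=scores.get))  # the class with the larger numerator wins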

  • General workflow of a naive Bayes classifier
    1. Collect samples of each class
    2. For each class, estimate the probability of each feature value; when a feature is continuous, a normal (Gaussian) distribution is assumed by default (see the sketch after this list)
    3. Feed in the data to be classified
    4. Compare the posterior probabilities; the class with the largest posterior is the predicted class
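
A minimal sketch of these four steps for continuous features, fitting a per-class Gaussian to each feature with numpy (the training data, labels, and query point are invented for illustration):

import numpy as np

# step 1: collected samples (invented); rows are samples, columns are features
X = np.array([[1.0, 2.1], [1.2, 1.9], [3.0, 3.5], [3.2, 3.7]])
y = np.array([0, 0, 1, 1])

# step 2: per-class feature statistics under the Gaussian assumption
stats = {}
for c in np.unique(y):
    Xc = X[y == c]
    stats[c] = (Xc.mean(axis=0), Xc.std(axis=0) + 1e-9, len(Xc) / len(X))

def gaussian_log_pdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# step 3: a new sample to classify
x_new = np.array([1.1, 2.0])

# step 4: compare log posteriors; the largest one gives the predicted class
log_post = {c: gaussian_log_pdf(x_new, mu, sigma).sum() + np.log(prior)
            for c, (mu, sigma, prior) in stats.items()}
print(max(log_post, key=log_post.get))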

Notes

  • When computing p(x1|ci), p(x2|ci), ..., p(xn|ci), if any one term is 0 the whole product becomes 0. To avoid this, when counting frequencies each numerator count can be initialized to 1 and each denominator to 2 (Laplace smoothing)
  • Likewise, when computing p(x1|ci), p(x2|ci), ..., p(xn|ci), most of the factors are very small and the product can underflow. The fix is to take logarithms: ln(ab) = ln(a) + ln(b). Since the logarithm is monotonic, nothing is lost for the comparison, as the sketch below shows
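
A quick illustration of the underflow problem and the log fix (the per-feature probabilities are invented):

import math

probs = [1e-5] * 100                 # 100 small factors p(xj|ci), invented

product = 1.0
for p in probs:
    product *= p
print(product)                       # 0.0 -- the product underflows double precision

log_sum = sum(math.log(p) for p in probs)
print(log_sum)                       # about -1151.3, still usable for comparing classes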

Example code

# -*- coding:utf-8 -*-
from numpy import *
def loadDataSet():
    # tokenized posts (each inner list is one post split into words)
    postingList = [['my', 'dog', 'has', 'flea','problem', 'help', 'please'],
                   ['maybe', 'not', 'take', 'him','to', 'dog', 'park', 'stupid'],
                   ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                   ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                   ['mr', 'licks', 'ate', 'my', 'steak', 'how','to', 'stop', 'him'],
                   ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0, 1, 0, 1, 0, 1]    # 1 = abusive post, 0 = normal post
    return postingList, classVec

# build the vocabulary list (all unique words across the documents)
def createVocabList(dataSet):
    vocabSet = set([])
    for document in dataSet:
        vocabSet = vocabSet | set(document) # union of the two sets
    return list(vocabSet)

# build a 0/1 set-of-words vector: 1 if the corresponding word of vocabList appears in inputSet
def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0]*len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
        else:
            print('the word: %s is not in my Vocabulary!' %word)
    return returnVec

# estimate the log-likelihood vectors for p(wi|c0), p(wi|c1) and the prior p(c1)
def trainNB0(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory)/float(numTrainDocs)
    p0Num = ones(numWords); p1Num = ones(numWords) # initialize counts to 1 so a single zero probability cannot wipe out the product
    p0Denom = 2.0; p1Denom = 2.0                   # and denominators to 2 (Laplace smoothing)
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    # take logs to avoid underflow when the factors are very small
    p1Vect = log(p1Num/p1Denom)
    p0Vect = log(p0Num/p0Denom)
    return p0Vect, p1Vect, pAbusive

# compare log posteriors: sum of log-likelihoods of the present words plus the log prior
def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    p1 = sum(vec2Classify*p1Vec) + log(pClass1)
    p0 = sum(vec2Classify*p0Vec) + log(1.0 - pClass1)
    if p1 > p0:
        return 1
    else:
        return 0

def testingNB():
    listOPosts,listClasses = loadDataSet()
    myVocabList = createVocabList(listOPosts)
    trainMat = []
    for postinDoc in listOPosts:
        trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
    p0V, p1V, pAb = trainNB0(array(trainMat), array(listClasses))
    testEntry = ['love', 'my', 'dalmation']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print(testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb))
    testEntry = ['stupid', 'garbage']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print(testEntry, 'classified as:', classifyNB(thisDoc, p0V, p1V, pAb))

# naive Bayes bag-of-words model: counts occurrences instead of marking presence
def bagOfWords2VecMN(vocabList, inputSet):
    returnVec = [0]*len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1
    return returnVec

# split a long string into lowercase tokens longer than two characters
def textParse(bigString):
    import re
    listOfTokens = re.split(r'\W+', bigString)
    return [tok.lower() for tok in listOfTokens if len(tok) > 2]

def spamTest():
    docList = []; classList = []; fullText = []
    for i in range(1,26):
        print (i)
        wordList = textParse(open('email/spam/%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(1)
        wordList = textParse(open('email/ham/%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(0)
    vocabList = createVocabList(docList)

    # randomly build the training set by holding out 10 documents for testing
    trainingSet = list(range(50)); testSet = []
    for i in range(10):
        randIndex = int(random.uniform(0, len(trainingSet)))
        testSet.append(trainingSet[randIndex])
        del(trainingSet[randIndex])
    trainMat = []; trainClasses = []
    for docIndex in trainingSet:
        trainMat.append(setOfWords2Vec(vocabList, docList[docIndex]))
        trainClasses.append(classList[docIndex])
    p0V, p1V, pSpam = trainNB0(array(trainMat), array(trainClasses))
    errorCount = 0

    # classify the held-out test set
    for docIndex in testSet:
        wordVector = setOfWords2Vec(vocabList, docList[docIndex])
        if classifyNB(array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:
            errorCount += 1
    print('the error rate is:', float(errorCount)/len(testSet))
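
To try the routines above (the spam test assumes the email/spam/%d.txt and email/ham/%d.txt files referenced in spamTest() are present on disk):

if __name__ == '__main__':
    testingNB()   # toy posting example
    spamTest()    # hold-out test on the email corpus; the error rate varies with the random split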

Algorithm characteristics

Pros: still effective with relatively little data; insensitive to missing data
Cons: sensitive to how the input data is prepared
Suitable data: nominal (categorical) data
