ML: Naive Bayes

1. A classification method based on Bayesian decision theory

  1. Pros: still effective with small amounts of data; can handle multi-class problems
  2. Cons: sensitive to how the input data is prepared
  3. Works with: nominal values
  4. Core idea: choose the decision with the highest probability. If $p_1$ is the probability that the point $(x, y)$ belongs to class 1 and $p_2$ is the probability that it belongs to class 2, predict class 1 when $p_1 > p_2$, and class 2 otherwise
  5. "Naive": the features are assumed to be mutually independent, and every feature is treated as equally important

2. Conditional probability

  1. The probability that A occurs given that B has occurred: $p(A|B) = \frac{p(AB)}{p(B)}$
  2. Bayes' rule: $p(A|B) = \frac{p(B|A)\,p(A)}{p(B)}$ (see the worked example below)
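
A quick worked example of Bayes' rule with made-up numbers: suppose 30% of messages are spam, the word "free" appears in 40% of spam messages, and in 5% of non-spam messages. Then

$$P(\text{spam}\mid\text{free}) = \frac{P(\text{free}\mid\text{spam})\,P(\text{spam})}{P(\text{free})} = \frac{0.4\times 0.3}{0.4\times 0.3 + 0.05\times 0.7} = \frac{0.12}{0.155}\approx 0.77$$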

3. Classifying with conditional probabilities

  1. For a vector $\mathbf{w}$, the probability that it belongs to class $c_i$: $p(c_i|\mathbf{w}) = \frac{p(\mathbf{w}|c_i)\,p(c_i)}{p(\mathbf{w})}$
  2. If $p(c_1|\mathbf{w}) > p(c_2|\mathbf{w})$, predict class $c_1$; if $p(c_1|\mathbf{w}) < p(c_2|\mathbf{w})$, predict class $c_2$
  3. For naive Bayes the features are assumed mutually independent, so $p(\mathbf{w}|c_i) = p(w_1|c_i)\,p(w_2|c_i)\cdots p(w_n|c_i)$ (see the derivation below)
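
Two simplifications that the code below relies on: $p(\mathbf{w})$ is the same for every class, so it can be dropped when we only compare classes, and taking logarithms turns a product of many small probabilities into a sum, avoiding floating-point underflow:

$$\arg\max_i\, p(c_i\mid\mathbf{w}) = \arg\max_i\, p(\mathbf{w}\mid c_i)\,p(c_i) = \arg\max_i\left[\sum_{k=1}^{n}\log p(w_k\mid c_i) + \log p(c_i)\right]$$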

4. Text classification with Python

  • Running example: posts from an online community's message board, labeled abusive (1) or not abusive (0)
  1. Prepare the data: build word vectors from text
'''Create sample data; returns the tokenized documents and their class labels'''
def loadDataSet():
    postingList=[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                 ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                 ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                 ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                 ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                 ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0,1,0,1,0,1]    #1 is abusive, 0 not
    return postingList,classVec

'''Create a list of all the unique words that appear across the documents'''
def createVocabList(dataSet):    
    vocabSet = set([])
    for document in dataSet:
        vocabSet = vocabSet | set(document)    # union of the two sets
    return list(vocabSet)

'''Given the vocabulary list and a document, return a vector marking which vocabulary words appear in the document'''
def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0] * len(vocabList)    # initialize the output vector
    for word in inputSet:    # iterate over the words in the document
        if word in vocabList:    # if the word is in the vocabulary, set the corresponding entry to 1
            returnVec[vocabList.index(word)] = 1
        else:
            print('the word: %s is not in my Vocabulary!' % word)
    return returnVec

[IN]: listOPosts, listClasses = loadDataSet()
[IN]: myVocabList = createVocabList(listOPosts)
[IN]: print(myVocabList)
[OUT]: ['garbage', 'not', 'steak', 'is', 'dog', 'how', 'my', 'food', 'to', 'licks', 'mr', 
       'buying', 'so', 'problems', 'park', 'stop', 'ate', 'help', 'stupid', 'love', 'flea', 
       'worthless', 'take', 'posting', 'has', 'cute', 'dalmation', 'quit', 'please', 'him', 'maybe', 'I']

[IN]: print(setOfWords2Vec(myVocabList, listOPosts[0]))
[OUT]: [0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
[IN]: print(setOfWords2Vec(myVocabList, listOPosts[3]))
[OUT]: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
  2. Train the algorithm: compute probabilities from the word vectors
import numpy as np

'''Input: the document matrix and the label vector; output: p(w|c0), p(w|c1), and p(c1)'''
def trainNB0(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)    # number of documents
    numWords = len(trainMatrix[0])    # vocabulary size
    pAbusive = sum(trainCategory) / float(numTrainDocs)    # p(c1)
    p0Num = np.ones(numWords)    # initialize word counts to 1 (Laplace smoothing) so no word gets probability 0, which would zero out the whole product
    p1Num = np.ones(numWords)
    p0Denom = 2.0    # initialize the total word counts to 2 accordingly
    p1Denom = 2.0
    for i in range(numTrainDocs):    # iterate over the documents
        if trainCategory[i] == 1:    # abusive document?
            p1Num += trainMatrix[i]    # vector addition: per-word occurrence counts
            p1Denom += sum(trainMatrix[i])    # total number of words seen in class 1
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = np.log(p1Num / p1Denom)    # p(w|c1); take logs so a product of many tiny values does not round to 0
    p0Vect = np.log(p0Num / p0Denom)    # p(w|c0); logs also turn products into sums
    return p0Vect, p1Vect, pAbusive

[IN]: trainMat = []
[IN]: for postinDoc in listOPosts:
          trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
[IN]: p0V, p1V, pAb = trainNB0(trainMat, listClasses)

[IN]: pAb
[OUT]: 0.5

[IN]: p0V
[OUT]: array([-3.25809654, -3.25809654, -2.56494936, -2.56494936, -2.56494936,
              -2.56494936, -1.87180218, -3.25809654, -2.56494936, -2.56494936,
              -2.56494936, -3.25809654, -2.56494936, -2.56494936, -3.25809654,
              -2.56494936, -2.56494936, -2.56494936, -3.25809654, -2.56494936,
              -2.56494936, -3.25809654, -3.25809654, -3.25809654, -2.56494936,
              -2.56494936, -2.56494936, -3.25809654, -2.56494936, -2.15948425,
              -3.25809654, -2.56494936])

[IN]: p1V
[OUT]: array([-2.35137526, -2.35137526, -3.04452244, -3.04452244, -1.94591015,
              -3.04452244, -3.04452244, -2.35137526, -2.35137526, -3.04452244,
              -3.04452244, -2.35137526, -3.04452244, -3.04452244, -2.35137526,
              -2.35137526, -3.04452244, -3.04452244, -1.65822808, -3.04452244,
              -3.04452244, -1.94591015, -2.35137526, -2.35137526, -3.04452244,
              -3.04452244, -3.04452244, -2.35137526, -3.04452244, -2.35137526,
              -2.35137526, -3.04452244])
  3. Build the complete classifier
import numpy as np


'''Classifier'''
def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    p1 = sum(vec2Classify * p1Vec) + np.log(pClass1)    # log p(w|c1) + log p(c1)
    p0 = sum(vec2Classify * p0Vec) + np.log(1.0 - pClass1)    # log p(w|c0) + log p(c0)
    if p1 > p0:
        return 1
    else:
        return 0

'''Test'''
def testingNB():
    listOPosts, listClasses = loadDataSet()
    myVocabList = createVocabList(listOPosts)
    trainMat = []
    for postinDoc in listOPosts:
        trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
    p0V, p1V, pAb = trainNB0(np.array(trainMat), np.array(listClasses))
    testEntry = ['love', 'my', 'dog']
    thisDoc = np.array(setOfWords2Vec(myVocabList, testEntry))
    print(testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb))
    testEntry = ['stupid', 'garbage']
    thisDoc = np.array(setOfWords2Vec(myVocabList, testEntry))
    print(testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb))

[IN]: testingNB()
[OUT]: ['love', 'my', 'dog'] classified as:  0
[OUT]: ['stupid', 'garbage'] classified as:  1
  4. Bag-of-words model: the set-of-words model above only records whether a word appears; the bag-of-words model counts how many times it appears (a quick comparison follows the code below)
'''Bag-of-words: like setOfWords2Vec, but counts occurrences instead of recording presence'''
def bagOfWords2VecMN(vocabList, inputSet):
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1    # increment the count rather than setting it to 1
    return returnVec
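
A quick comparison of the two models on a hypothetical three-word document (reusing the myVocabList built earlier): the set model records presence only, while the bag model counts the repeated word.

[IN]: doc = ['my', 'dog', 'dog']    # hypothetical document with a repeated word
[IN]: setVec = setOfWords2Vec(myVocabList, doc)
[IN]: bagVec = bagOfWords2VecMN(myVocabList, doc)
[IN]: print(setVec[myVocabList.index('dog')], bagVec[myVocabList.index('dog')])
[OUT]: 1 2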

5. Exercise: filtering spam email with naive Bayes

  1. Parse text and extract tokens (a quick sanity check follows the code):
import re

'''Split a long string into lowercase tokens, dropping punctuation and short words'''
def textParse(bigString):
    listOfTokens = re.split(r'\W+', bigString)    # split on runs of non-word characters; the original \W* also matches empty strings and splits everything into single characters
    return [tok.lower() for tok in listOfTokens if len(tok) > 2]    # keep lowercased tokens longer than 2 characters
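
A quick sanity check on a made-up string: punctuation is stripped, tokens are lowercased, and anything of length 2 or less ('Hi', 'is', 'a', '42') is dropped.

[IN]: print(textParse('Hi, this is a TEST-string: 42 tokens?'))
[OUT]: ['this', 'test', 'string', 'tokens']
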
  2. Cross-validation with naive Bayes:
import numpy as np
import random

def spamTest():
    docList = []
    classList = []
    fullText = []
    for i in range(1, 26):
        wordList = textParse(open('Ch04/email/spam/%d.txt' % i, encoding='ISO-8859-1').read())    # load and parse the spam files
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(1)
        wordList = textParse(open('Ch04/email/ham/%d.txt' % i, encoding='ISO-8859-1').read())    # load and parse the ham files
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(0)
    vocabList = createVocabList(docList)    # vocabulary of all unique words

    trainingSet = list(range(50))
    testSet = []
    for i in range(10):    # randomly move 10 documents (by index) from the training set to the test set
        randIndex = int(random.uniform(0, len(trainingSet)))
        testSet.append(trainingSet[randIndex])
        del trainingSet[randIndex]

    trainMat = []    # training matrix
    trainClasses = []    # training labels
    for docIndex in trainingSet:
        trainMat.append(setOfWords2Vec(vocabList, docList[docIndex]))
        trainClasses.append(classList[docIndex])
    p0V, p1V, pSpam = trainNB0(np.array(trainMat), np.array(trainClasses))    # estimate the three probability terms

    errorCount = 0    # misclassification counter
    for docIndex in testSet:
        wordVector = setOfWords2Vec(vocabList, docList[docIndex])
        if classifyNB(np.array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:    # compare prediction with the true label
            errorCount += 1
    print('the error rate is: ', float(errorCount) / len(testSet))

[IN]: for i in range(10):
          spamTest()
[OUT]: the error rate is:  0.1
       the error rate is:  0.0
       the error rate is:  0.1
       the error rate is:  0.0
       the error rate is:  0.0
       the error rate is:  0.1
       the error rate is:  0.1
       the error rate is:  0.1
       the error rate is:  0.1
       the error rate is:  0.2
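
Because the 10 test documents are drawn at random, a single run's error rate is noisy; averaging over many runs gives a more stable estimate. A minimal sketch, assuming spamTest is tweaked to end with return float(errorCount) / len(testSet):

import numpy as np

errorRates = [spamTest() for _ in range(10)]    # assumes spamTest() returns its error rate
print('mean error rate over 10 runs:', np.mean(errorRates))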