Machine Learning in Action, Chapter 4, Sections 4.1-4.5: Naive Bayes

This Machine Learning in Action series of posts implements and works through the code in the book, essentially serving as reading notes. After all, "in action" means you cannot just read: once you start coding you run into all sorts of odd problems. These posts are fairly rough and should be read alongside the book. I am learning as I go and my level is limited, so please point out any mistakes in the comments. The book's code and data are widely available online; please download them yourself.
Sections 4.1 to 4.4 cover the related theory; for details see the reposted notes for Lecture 5 of the Stanford ML open course on this blog.

4.5 Classifying text with Python

To extract features from text, the text must first be split apart; the features are the text's tokens. A token can be any combination of characters.
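
As a minimal sketch of that splitting step (using Python's standard `re` module, not code from the book), turning a sentence into lowercase tokens might look like:

```python
import re

def simple_tokenize(text):
    # Split on any run of non-word characters and lowercase the result;
    # empty strings produced by leading/trailing delimiters are filtered out.
    return [tok.lower() for tok in re.split(r'\W+', text) if tok]

print(simple_tokenize("My dog has flea problems, help please!"))
# ['my', 'dog', 'has', 'flea', 'problems', 'help', 'please']
```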

4.5.1 Prepare the data: building word vectors from text

Documents — token splitting — token set (the vocabulary) — documents converted into word vectors (1 if a word appears, 0 otherwise)

Word-list-to-vector conversion functions

#coding:utf-8
from numpy import *

def loadDataSet():
    postingList=[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                 ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                 ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                 ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                 ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                 ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0,1,0,1,0,1]    #1 = abusive text, 0 = not abusive
    return postingList,classVec

def createVocabList(dataSet):
    vocabSet = set([])  #create an empty set
    for document in dataSet:
        vocabSet = vocabSet | set(document) #union of the two sets
    return list(vocabSet)    

def setOfWords2Vec(vocabList, inputSet): #the vocabulary and one input document
    returnVec = [0]*len(vocabList)  
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
        else: print "the word: %s is not in my Vocabulary!" % word
    return returnVec
>>> import bayes
>>> listOposts ,listClasses = bayes.loadDataSet()
>>> myVocabList = bayes.createVocabList(listOposts)
>>> myVocabList
['cute', 'love', 'help', 'garbage', 'quit', 'I', 'problems', 'is', 'park', 'stop', 'flea', 'dalmation', 'licks', 'food', 'not', 'him', 'buying', 'posting', 'has', 'worthless', 'ate', 'to', 'maybe', 'please', 'dog', 'how', 'stupid', 'so', 'take', 'mr', 'steak', 'my']
>>> reload (bayes)
<module 'bayes' from 'bayes.py'>
>>> bayes.setOfWords2Vec(myVocabList,listOposts[0])
[0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1]
>>> bayes.setOfWords2Vec(myVocabList,listOposts[3])
[0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
>>> 

4.5.2 Train the algorithm: computing probabilities from word vectors

Computing the probabilities: by Bayes' theorem we need
Prior probability: P(Y=c_k), the probability of a class; e.g. the probability that a document is abusive.
Conditional probability: P(X=w_i | Y=c_k), the probability of feature w_i given membership in a class; e.g. the probability that the word "stupid" appears in an abusive document.
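
For the toy data above, the prior can be read straight off the label vector; a quick standalone check (not the book's code):

```python
classVec = [0, 1, 0, 1, 0, 1]  # labels from loadDataSet(): 1 = abusive

# Prior P(Y=1): the fraction of documents labeled abusive
pAbusive = sum(classVec) / float(len(classVec))
print(pAbusive)  # 0.5 -- three of the six documents are abusive
```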

Naive Bayes classifier training function

def trainNB0(trainMatrix,trainCategory):  #input: the document matrix and the corresponding label vector
    numTrainDocs = len(trainMatrix) #number of documents
    numWords = len(trainMatrix[0])  #vocabulary size
    pAbusive = sum(trainCategory)/float(numTrainDocs) #probability that a document is abusive
    p0Num = zeros(numWords); p1Num = zeros(numWords)       #zero vectors of vocabulary length
    p0Denom = 0.0; p1Denom = 0.0                        
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:   #if document i is abusive
            p1Num += trainMatrix[i] #accumulate abusive-document word vectors; each component becomes that word's count
            p1Denom += sum(trainMatrix[i])  #total word count over all abusive documents
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = p1Num/p1Denom #elementwise division of a vector by a float
    p0Vect = p0Num/p0Denom
    return p0Vect,p1Vect,pAbusive
>>> from numpy import *
>>> reload(bayes)
<module 'bayes' from 'bayes.pyc'>
>>> listOposts ,listClasses = bayes.loadDataSet()
>>> myVocablist = bayes.createVocabList(listOposts)
>>> trainMat = []
>>> for postinDoc in listOposts :
...     trainMat.append(bayes.setOfWords2Vec(myVocablist,postinDoc))
... 
>>> trainMat 
[[0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], [0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0]]
>>> 

trainMat holds the word vectors of the 6 documents, stacked into a matrix.
p0V: the probability of each word appearing, given a normal (non-abusive) document.

>>> import bayes 
>>> from numpy import *
>>> listOposts ,listClasses = bayes.loadDataSet()
>>> myVocabList = bayes.createVocabList(listOposts)
>>> trainMat = []
>>> for postinDoc in listOposts:
...     trainMat.append (bayes.setOfWords2Vec (myVocabList, postinDoc))
... 
>>> p0V ,p1V ,pAb = bayes.trainNB0 (trainMat, listClasses)
>>> p0V
array([ 0.04166667,  0.04166667,  0.04166667,  0.        ,  0.        ,
        0.04166667,  0.04166667,  0.04166667,  0.        ,  0.04166667,
        0.04166667,  0.04166667,  0.04166667,  0.        ,  0.        ,
        0.08333333,  0.        ,  0.        ,  0.04166667,  0.        ,
        0.04166667,  0.04166667,  0.        ,  0.04166667,  0.04166667,
        0.04166667,  0.        ,  0.04166667,  0.        ,  0.04166667,
        0.04166667,  0.125     ])
>>> p1V
array([ 0.        ,  0.        ,  0.        ,  0.05263158,  0.05263158,
        0.        ,  0.        ,  0.        ,  0.05263158,  0.05263158,
        0.        ,  0.        ,  0.        ,  0.05263158,  0.05263158,
        0.05263158,  0.05263158,  0.05263158,  0.        ,  0.10526316,
        0.        ,  0.05263158,  0.05263158,  0.        ,  0.10526316,
        0.        ,  0.15789474,  0.        ,  0.05263158,  0.        ,
        0.        ,  0.        ])
>>> pAb
0.5
>>> 
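
The conditional probability vectors can be sanity-checked directly: the largest entry of p1V should belong to 'stupid', the word most indicative of the abusive class. A self-contained sketch that re-derives the same quantity with plain Python (rather than importing bayes):

```python
postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
               ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
               ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
               ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
               ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
               ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
classVec = [0, 1, 0, 1, 0, 1]

# Count word occurrences in the abusive (class 1) documents only
counts = {}
total = 0
for doc, label in zip(postingList, classVec):
    if label == 1:
        for word in doc:
            counts[word] = counts.get(word, 0) + 1
            total += 1

# P(w | Y=1) without smoothing, as in this first version of trainNB0
probs = {word: n / float(total) for word, n in counts.items()}
best = max(probs, key=probs.get)
print(best, probs[best])  # 'stupid': 3 of the 19 abusive words, ~0.1579
```

This matches the largest entry of p1V printed above, 0.15789474.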

4.5.3 Test the algorithm: adjusting the classifier for real-world conditions

The class probability of a document is a product of many individual probabilities; if any one of them is 0, the whole product is 0.
To avoid this, initialize every word count to 1 and each denominator to 2.

    p0Num = ones(numWords); p1Num = ones(numWords) # change: initialize counts to 1 (Laplace smoothing)
    p0Denom = 2.0; p1Denom = 2.0
    p1Vect = log(p1Num/p1Denom) # change: take logarithms to avoid underflow and rounding error
    p0Vect = log(p0Num/p0Denom)
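
A quick numeric illustration of why the logs are taken (a standalone sketch, not from the book): multiplying many small conditional probabilities underflows to 0.0 in floating point, while summing their logarithms stays representable.

```python
from math import log

p = 0.05   # a typical per-word conditional probability
n = 500    # number of words in a long document

product = 1.0
for _ in range(n):
    product *= p      # 0.05**500 is far below the smallest float: underflows to 0.0
log_sum = n * log(p)  # the same quantity in log space

print(product)   # 0.0 -- underflow
print(log_sum)   # about -1497.9, perfectly representable
```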

Naive Bayes classification function

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1): #input: the vector to classify plus the three probabilities computed by trainNB0
    p1 = sum(vec2Classify * p1Vec) + log(pClass1)    #ln(a*b) = ln(a) + ln(b)
    p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1) # * is elementwise multiplication
    if p1 > p0:
        return 1
    else: 
        return 0

def testingNB():
    listOPosts,listClasses = loadDataSet()
    myVocabList = createVocabList(listOPosts)
    trainMat=[]
    for postinDoc in listOPosts:
        trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
    p0V,p1V,pAb = trainNB0(array(trainMat),array(listClasses))
    testEntry = ['love', 'my', 'dalmation']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print testEntry,'classified as: ',classifyNB(thisDoc,p0V,p1V,pAb)
    testEntry = ['stupid', 'garbage']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print testEntry,'classified as: ',classifyNB(thisDoc,p0V,p1V,pAb)
>>> reload(bayes)
<module 'bayes' from 'bayes.py'>
>>> bayes.testingNB()
['love', 'my', 'dalmation'] classified as:  0
['stupid', 'garbage'] classified as:  1
>>> 

4.5.4 Prepare the data: the bag-of-words document model

Set-of-words model (setOfWords2Vec): uses only whether a word appears as the feature.
Bag-of-words model (bagOfWords2VecMN): each word can be counted multiple times.

Naive Bayes bag-of-words model

def bagOfWords2VecMN(vocabList, inputSet):
    returnVec = [0]*len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1
    return returnVec
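
The difference shows up as soon as a word repeats. A small check, with both functions re-implemented inline so the snippet is self-contained (the toy vocabulary here is made up for illustration):

```python
def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1   # presence only
    return returnVec

def bagOfWords2VecMN(vocabList, inputSet):
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1  # count occurrences
    return returnVec

vocab = ['dog', 'stupid', 'my']
doc = ['stupid', 'dog', 'stupid']
print(setOfWords2Vec(vocab, doc))    # [1, 1, 0] -- 'stupid' recorded once
print(bagOfWords2VecMN(vocab, doc))  # [1, 2, 0] -- 'stupid' counted twice
```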