1. Naive Bayes
1.1 Principles of naive Bayes
Reference for the principles: https://blog.csdn.net/llh_1178/article/details/79848922
https://www.cnblogs.com/hapjin/p/8119797.html
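In one line (a summary of the linked references, not a full derivation): naive Bayes scores a document d = (w_1, ..., w_n) for each class c by combining the class prior with per-word likelihoods under a conditional-independence assumption, then picks the class with the higher score; in practice the product is computed as a sum of logarithms to avoid floating-point underflow:

```latex
P(c \mid d) \;\propto\; P(c)\prod_{i=1}^{n} P(w_i \mid c)
\quad\Longrightarrow\quad
\log P(c) + \sum_{i=1}^{n} \log P(w_i \mid c)
```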
1.2 Text classification with a naive Bayes model
Code implementation:
Reference: https://blog.csdn.net/starmoth/article/details/88366732
1. Initialization
def loadDataSet():  # toy data set
    postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                   ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                   ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                   ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                   ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                   ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0, 1, 0, 1, 0, 1]  # 1 = abusive, 0 = normal
    return postingList, classVec

def createVocabList(dataSet):  # build the vocabulary
    vocabSet = set([])
    for document in dataSet:
        vocabSet = vocabSet | set(document)  # union of all word sets
    return list(vocabSet)

def bagOfWord2VecMN(vocabList, inputSet):  # turn a sentence into a count vector over the vocabulary
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1
    return returnVec
2. Training
from numpy import ones, log

def trainNB0(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory) / float(numTrainDocs)  # prior P(class = 1)
    p0Num = ones(numWords); p1Num = ones(numWords)  # counts start at 1 ...
    p0Denom = 2.0; p1Denom = 2.0                    # ... i.e. Laplace smoothing
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = log(p1Num / p1Denom)  # note: log probabilities avoid underflow
    p0Vect = log(p0Num / p0Denom)  # when many small factors are multiplied
    # return the per-word conditional log-probability vector for each class
    # and the prior probability of the abusive class
    return p0Vect, p1Vect, pAbusive
3. Classification
def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    # note: log P(c) + sum of log P(w|c) -- products become sums in log space
    p1 = sum(vec2Classify * p1Vec) + log(pClass1)
    p0 = sum(vec2Classify * p0Vec) + log(1 - pClass1)
    if p1 > p0:
        return 1
    else:
        return 0

def testingNB():  # end-to-end walkthrough
    listOPosts, listClasses = loadDataSet()        # load the data
    myVocabList = createVocabList(listOPosts)      # build the vocabulary
    trainMat = []
    for postinDoc in listOPosts:
        trainMat.append(bagOfWord2VecMN(myVocabList, postinDoc))
    p0V, p1V, pAb = trainNB0(trainMat, listClasses)  # train
    # test
    testEntry = ['love', 'my', 'dalmation']
    thisDoc = bagOfWord2VecMN(myVocabList, testEntry)
    print(testEntry, 'classified as:', classifyNB(thisDoc, p0V, p1V, pAb))
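For comparison, the same train-and-classify flow can be sketched with scikit-learn (an assumption — the referenced post uses only the hand-rolled functions above). `CountVectorizer` plays the role of `bagOfWord2VecMN`, and `MultinomialNB` with `alpha=1.0` reproduces the Laplace smoothing in `trainNB0`:

```python
# Same toy data as loadDataSet(), joined into strings for CountVectorizer.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = ["my dog has flea problems help please",
         "maybe not take him to dog park stupid",
         "my dalmation is so cute I love him",
         "stop posting stupid worthless garbage",
         "mr licks ate my steak how to stop him",
         "quit buying worthless dog food stupid"]
labels = [0, 1, 0, 1, 0, 1]  # 1 = abusive, 0 = normal, as above

vec = CountVectorizer()            # bag-of-words counts, like bagOfWord2VecMN
X = vec.fit_transform(posts)
model = MultinomialNB(alpha=1.0)   # alpha=1.0 is Laplace smoothing
model.fit(X, labels)
print(model.predict(vec.transform(["love my dalmation"])))
```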
2. The SVM model
2.1 Principles of SVM
Reference for the principles: https://blog.csdn.net/weixin_39605679/article/details/81170300
2.2 Text classification
The implementation proceeds in the following steps:
1. Crawl the corpus data
2. Clean and preprocess the corpus
3. Hold out a test set
4. Word segmentation
5. Label the corpus
6. Shuffle the corpus
7. Feature extraction (feature selection)
8. Vectorization
9. Hyperparameter tuning
10. Train the model
11. Predict on new text
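Steps 7 through 11 can be sketched with scikit-learn (an assumption — the notes do not name a library; the toy corpus below is hypothetical stand-in data, reusing the abusive/normal labels from section 1):

```python
# TfidfVectorizer covers feature extraction and vectorization (steps 7-8),
# LinearSVC is the SVM classifier (step 10); its C parameter is what would
# be tuned in step 9, e.g. with GridSearchCV.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

texts = ["dog food is great", "stupid worthless garbage",
         "my cute dog", "stop posting stupid garbage"]
labels = [0, 1, 0, 1]  # 1 = abusive, 0 = normal

clf = Pipeline([
    ("tfidf", TfidfVectorizer()),  # step 8: vectorization
    ("svm", LinearSVC(C=1.0)),     # step 10: training
])
clf.fit(texts, labels)
print(clf.predict(["worthless stupid dog"]))  # step 11: prediction
```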
3. LDA topic model
Reference for the principles: https://blog.csdn.net/Kaiyuan_sjtu/article/details/83572927
Code implementation:
1. Data preprocessing
from gensim.test.utils import common_texts
from gensim.corpora.dictionary import Dictionary
from gensim.models import LdaModel

# Create a corpus from a list of texts
common_dictionary = Dictionary(common_texts)
common_corpus = [common_dictionary.doc2bow(text) for text in common_texts]
# Train the model on the corpus.
lda = LdaModel(common_corpus, num_topics=10)
2. Converting text to a bag-of-words representation
from gensim.corpora import Dictionary

dct = Dictionary(["máma mele maso".split(), "ema má máma".split()])
dct.doc2bow(["this", "is", "máma"])
# [(2, 1)]  -- only "máma" is in the dictionary; unseen words are dropped
dct.doc2bow(["this", "is", "máma"], return_missing=True)
# ([(2, 1)], {'this': 1, 'is': 1})  -- return_missing also reports the unseen words
3. Applying the LDA model
from gensim.models import LdaModel

lda = LdaModel(common_corpus, num_topics=10)
lda.print_topic(1, topn=2)
# '0.500*"9" + 0.045*"10"'  -- integer token ids, because no id2word mapping was given