TF-IDF
Term Frequency-Inverse Document Frequency (TF-IDF) is a common weighting technique in information retrieval and text mining.
TF-IDF is a statistical method for evaluating how important a word is to one document in a collection or corpus. A word's importance increases in proportion to the number of times it appears in that document, but decreases in proportion to how often it appears across the corpus.
The core idea of TF-IDF: if a word or phrase has a high term frequency (TF) in one article yet rarely appears in other articles, it is considered to discriminate well between categories and is well suited for classification.
Term Frequency (TF):
The frequency with which a given word appears in the document. The raw count is normalized by the document's length to prevent a bias toward longer documents (the same word will tend to have a higher raw count in a long document than in a short one, regardless of whether the word is important).
Inverse Document Frequency (IDF):
A measure of a word's general importance. The IDF of a particular word is obtained by dividing the total number of documents by the number of documents containing that word, then taking the logarithm of the quotient.
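In symbols: tf(w, d) = count(w, d) / |d|, and idf(w) = log(N / n_w), where N is the total number of documents and n_w is the number of documents containing w. A minimal sketch of the two factors, using the +1-smoothed, base-10 variant (smoothing avoids division by zero when a word appears in no document):

```python
import math

def tf(count, doc_len):
    # term frequency: raw count normalized by document length
    return count / doc_len

def idf(n_docs, n_docs_with_word):
    # inverse document frequency; +1 smoothing avoids division by zero
    return math.log10((n_docs + 1) / (n_docs_with_word + 1))

# a word appearing twice in an 11-word document, present in 1 of 2 documents
print(tf(2, 11) * idf(2, 1))  # tf ≈ 0.1818, idf ≈ 0.1761
```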
Code Implementation
Steps:
0. Import dependencies
# 0. Import dependencies
import numpy as np
import pandas as pd
1. Define data and preprocess
# 1. Define data and preprocess
docA = "The cat sat on my bed The man i hate it"
docB = "The dog sat on my knees The girl i like it"
bowA = docA.split(" ")
bowB = docB.split(" ")
print(bowA)
print(bowB)
# build the vocabulary
wordSet = set(bowA).union(set(bowB))
print(wordSet)
['The', 'cat', 'sat', 'on', 'my', 'bed', 'The', 'man', 'i', 'hate', 'it']
['The', 'dog', 'sat', 'on', 'my', 'knees', 'The', 'girl', 'i', 'like', 'it']
{'girl', 'bed', 'dog', 'man', 'i', 'hate', 'on', 'cat', 'sat', 'knees', 'like', 'it', 'my', 'The'}
2. Count word occurrences
# 2. Count word occurrences
# use dictionaries to record how many times each word appears
wordDictA = dict.fromkeys(wordSet, 0)
wordDictB = dict.fromkeys(wordSet, 0)
# walk each document and count occurrences
for word in bowA:
    wordDictA[word] += 1
for word in bowB:
    wordDictB[word] += 1
print(pd.DataFrame([wordDictA, wordDictB], index=['docA', 'docB']))
girl bed dog man i hate on cat sat knees like it my The
docA 0 1 0 1 1 1 1 1 1 0 0 1 1 2
docB 1 0 1 0 1 0 1 0 1 1 1 1 1 2
3. Compute term frequency (TF)
# 3. Compute term frequency (TF)
def computeTF(wordDict, bow):
    # record tf in a dict: compute tf for every word of the bow document
    tfDict = {}
    nbowCount = len(bow)
    for word, count in wordDict.items():
        tfDict[word] = count / nbowCount
    return tfDict

tfA = computeTF(wordDictA, bowA)
tfB = computeTF(wordDictB, bowB)
print(tfA)
print(tfB)
{'hate': 0.09090909090909091, 'cat': 0.09090909090909091, 'i': 0.09090909090909091, 'dog': 0.0, 'man': 0.09090909090909091, 'The': 0.18181818181818182, 'like': 0.0, 'girl': 0.0, 'my': 0.09090909090909091, 'it': 0.09090909090909091, 'knees': 0.0, 'sat': 0.09090909090909091, 'on': 0.09090909090909091, 'bed': 0.09090909090909091}
{'hate': 0.0, 'cat': 0.0, 'i': 0.09090909090909091, 'dog': 0.09090909090909091, 'man': 0.0, 'The': 0.18181818181818182, 'like': 0.09090909090909091, 'girl': 0.09090909090909091, 'my': 0.09090909090909091, 'it': 0.09090909090909091, 'knees': 0.09090909090909091, 'sat': 0.09090909090909091, 'on': 0.09090909090909091, 'bed': 0.0}
4. Compute inverse document frequency (IDF)
# 4. Compute inverse document frequency (IDF)
import math

def computeIDF(wordDictList):
    # record idf in a dict: each word is a key, initialized to 0
    idfDict = dict.fromkeys(wordDictList[0], 0)
    N = len(wordDictList)
    for wordDict in wordDictList:
        # walk every word in the dict and count Ni, the number of documents containing word i
        for word, count in wordDict.items():
            if count > 0:
                # increment Ni for this word in idfDict
                idfDict[word] += 1
    # Ni is now known for every word i; apply the formula to turn it into an idf value
    for word, ni in idfDict.items():
        idfDict[word] = math.log10((N + 1) / (ni + 1))
    return idfDict

idfs = computeIDF([wordDictA, wordDictB])
print(idfs)
{'hate': 0.17609125905568124, 'cat': 0.17609125905568124, 'i': 0.0, 'dog': 0.17609125905568124, 'man': 0.17609125905568124, 'The': 0.0, 'like': 0.17609125905568124, 'girl': 0.17609125905568124, 'my': 0.0, 'it': 0.0, 'knees': 0.17609125905568124, 'sat': 0.0, 'on': 0.0, 'bed': 0.17609125905568124}
5. Compute TF-IDF
# 5. Compute TF-IDF
def computeTFIDF(tf, idfs):
    tfidf = {}
    for word, tfval in tf.items():
        tfidf[word] = tfval * idfs[word]
    return tfidf

tfidfA = computeTFIDF(tfA, idfs)
tfidfB = computeTFIDF(tfB, idfs)
print(pd.DataFrame([tfidfA, tfidfB]))
hate cat i dog man The like girl my it knees sat on bed
0 0.016008 0.016008 0.0 0.000000 0.016008 0.0 0.000000 0.000000 0.0 0.0 0.000000 0.0 0.0 0.016008
1 0.000000 0.000000 0.0 0.016008 0.000000 0.0 0.016008 0.016008 0.0 0.0 0.016008 0.0 0.0 0.000000
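Once each document is a TF-IDF vector, documents can be compared numerically, e.g. by cosine similarity. A minimal sketch under the same +1-smoothed, base-10 scheme; `tfidf_vector` is a hypothetical helper that recomputes the weights so the snippet stands alone:

```python
import math

def tfidf_vector(doc, corpus):
    # hypothetical helper: recompute the tf-idf dict for one document
    vocab = set(w for d in corpus for w in d.split(" "))
    words = doc.split(" ")
    def idf(w):
        ni = sum(1 for d in corpus if w in d.split(" "))
        return math.log10((len(corpus) + 1) / (ni + 1))
    return {w: (words.count(w) / len(words)) * idf(w) for w in vocab}

docA = "The cat sat on my bed The man i hate it"
docB = "The dog sat on my knees The girl i like it"
va = tfidf_vector(docA, [docA, docB])
vb = tfidf_vector(docB, [docA, docB])

# cosine similarity: dot product divided by the product of the norms
dot = sum(va[w] * vb[w] for w in va)
norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
print(dot / norm)  # 0.0 here: every shared word has idf 0, every discriminative word appears in only one doc
```

The result of 0.0 illustrates the weighting at work: words present in both documents ("The", "sat", "on", ...) get idf 0 and contribute nothing, while the nonzero-weighted words ("cat", "dog", "hate", "like", ...) never overlap between the two vectors.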