gensim Study Notes (3): Computing Similarity Between Documents

Similarity interface

Configure logging first:

>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

In the previous notes we introduced the vector space model and transformations between vector spaces, such as the TF-IDF and LSI models. With these transformations in place, we can compute the similarity between documents, or the similarity between one particular document and each document in a collection, for example between a query and a set of documents. Let's try this out with a concrete example.

>>> from gensim import corpora, models, similarities
>>> dictionary = corpora.Dictionary.load('/tmp/deerwester.dict')
>>> corpus = corpora.MmCorpus('/tmp/deerwester.mm') # comes from the first tutorial, "From strings to vectors"
>>> print(corpus)
MmCorpus(9 documents, 12 features, 28 non-zero entries)
>>> lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

First we load the data from disk, then transform it with the LSI model.
Now suppose we have the query "Human computer interaction". We compute the similarity between this query and each of the nine documents as follows:

>>> doc = "Human computer interaction"
>>> vec_bow = dictionary.doc2bow(doc.lower().split())
>>> vec_lsi = lsi[vec_bow] # convert the query to LSI space
>>> print(vec_lsi)
[(0, -0.461821), (1, 0.070028)]

Here we use cosine similarity to measure the similarity between two vectors.
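Cosine similarity is the cosine of the angle between two vectors: their dot product divided by the product of their norms. A minimal pure-Python illustration (the function name is my own, not part of gensim):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between dense vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # 1.0 (up to rounding)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Because the measure depends only on the angle, it ignores document length, which is why it is the standard choice in vector space models.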

To prepare for similarity queries, we need to enter all documents which we want to compare against subsequent queries. In our case, they are the same nine documents used for training LSI, converted to 2-D LSA space. But that’s only incidental, we might also be indexing a different corpus altogether.

>>> index = similarities.MatrixSimilarity(lsi[corpus]) # transform corpus to LSI space and index it

Note that similarities.MatrixSimilarity is only appropriate when the whole index fits in memory; with a corpus too large for RAM it will fail. In that case, use the similarities.Similarity class instead. See http://radimrehurek.com/gensim/similarities/docsim.html for details.

>>> index.save('/tmp/deerwester.index')
>>> index = similarities.MatrixSimilarity.load('/tmp/deerwester.index')

The index can likewise be saved and loaded.
Using the index, we can now compute the similarity between the query "Human computer interaction" and every document:

>>> sims = index[vec_lsi] # perform a similarity query against the corpus
>>> print(list(enumerate(sims))) # print (document_number, document_similarity) 2-tuples
[(0, 0.99809301), (1, 0.93748635), (2, 0.99844527), (3, 0.9865886), (4, 0.90755945),
(5, -0.12416792), (6, -0.1063926), (7, -0.098794639), (8, 0.05004178)]

>>> sims = sorted(enumerate(sims), key=lambda item: -item[1])
>>> print(sims) # print sorted (document number, similarity score) 2-tuples
[(2, 0.99844527), # The EPS user interface management system
(0, 0.99809301), # Human machine interface for lab abc computer applications
(3, 0.9865886), # System and human system engineering testing of EPS
(1, 0.93748635), # A survey of user opinion of computer system response time
(4, 0.90755945), # Relation of user perceived response time to error measurement
(8, 0.050041795), # Graph minors A survey
(7, -0.098794639), # Graph minors IV Widths of trees and well quasi ordering
(6, -0.1063926), # The intersection graph of paths in trees
(5, -0.12416792)] # The generation of random binary unordered trees

Here we observe an interesting result: documents no. 2 ("The EPS user interface management system") and no. 4 ("Relation of user perceived response time to error measurement") share no words at all with the query "Human computer interaction", yet after the LSI transformation both receive high similarity scores (no. 2 is the most similar!). This matches our intuition, since both documents are on the "computer-human" topic the query is about. This is exactly why we applied the LSI transformation in the first place; a plain bag-of-words model would give much poorer results here.
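The "no shared words" claim is easy to verify directly, using the document texts listed with the sorted results above:

```python
query = "Human computer interaction".lower().split()
doc2 = "The EPS user interface management system".lower().split()
doc4 = "Relation of user perceived response time to error measurement".lower().split()

# Neither top-scoring document shares a single word with the query,
# so a plain bag-of-words (boolean) match would score both as zero.
print(set(query) & set(doc2))  # set()
print(set(query) & set(doc4))  # set()
```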
