doc2vec

When doing text processing, we often need to decide whether two documents are similar, or find the documents most similar to a given input document.

Fortunately, gensim provides tools for this. The overall approach: for Chinese text, first segment each document into words, build a dictionary from the segmentation results, convert the original documents into vectors using that dictionary, and then build a similarity index over those vectors to query against. The gensim documentation describes the index as follows:

The main class is Similarity, which builds an index for a given set of documents. Once the index is built, you can perform efficient queries like “Tell me how similar is this query document to each document in the index?”. The result is a vector of numbers as large as the size of the initial set of documents, that is, one float for each index document. Alternatively, you can also request only the top-N most similar index documents to the query.
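One note before the code: the segmentation helper util_words_cut used throughout this post is the author's own module and is not shown. As a rough, hypothetical sketch of what such a helper might look like using jieba (the stop-word list here is an assumption, not the author's actual filtering):

import jieba

# Hypothetical stop-word list; the author's real helper may filter differently
STOP_WORDS = {'的', '是', '了', '吗', '我', '他', '该', '如何'}

def get_class_words_list(text):
    # Segment Chinese text with jieba, dropping whitespace and stop words
    return [w for w in jieba.cut(text) if w.strip() and w not in STOP_WORDS]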

Method 1: docsim (recommended; the results are relatively stable)

Example code (the training data is numbered so the results are easier to read):

from gensim import corpora
from gensim.similarities import Similarity
import util_words_cut  # the author's own segmentation helper (not shown)

# Training samples
raw_documents = [
    '0无偿居间介绍买卖毒品的行为应如何定性',
    '1吸毒男动态持有大量毒品的行为该如何认定',
    '2如何区分是非法种植毒品原植物罪还是非法制造毒品罪',
    '3为毒贩贩卖毒品提供帮助构成贩卖毒品罪',
    '4将自己吸食的毒品原价转让给朋友吸食的行为该如何认定',
    '5为获报酬帮人购买毒品的行为该如何认定',
    '6毒贩出狱后再次够买毒品途中被抓的行为认定',
    '7虚夸毒品功效劝人吸食毒品的行为该如何认定',
    '8妻子下落不明丈夫又与他人登记结婚是否为无效婚姻',
    '9一方未签字办理的结婚登记是否有效',
    '10夫妻双方1990年按农村习俗举办婚礼没有结婚证 一方可否起诉离婚',
    '11结婚前对方父母出资购买的住房写我们二人的名字有效吗',
    '12身份证被别人冒用无法登记结婚怎么办?',
    '13同居后又与他人登记结婚是否构成重婚罪',
    '14未办登记只举办结婚仪式可起诉离婚吗',
    '15同居多年未办理结婚登记,是否可以向法院起诉要求离婚'
]
corpora_documents = []
for item_text in raw_documents:
    item_str = util_words_cut.get_class_words_list(item_text)
    corpora_documents.append(item_str)

# Build the dictionary and the bag-of-words corpus

dictionary = corpora.Dictionary(corpora_documents)
corpus = [dictionary.doc2bow(text) for text in corpora_documents]

similarity = Similarity('-Similarity-index', corpus, num_features=400)

test_data_1 = '你好,我想问一下我想离婚他不想离,孩子他说不要,是六个月就自动生效离婚'
test_cut_raw_1 = util_words_cut.get_class_words_list(test_data_1)
test_corpus_1 = dictionary.doc2bow(test_cut_raw_1)
similarity.num_best = 5
print(similarity[test_corpus_1])  # returns the most similar samples as (index_of_document, similarity) tuples

print('################################')

test_data_2 = '家人因涉嫌运输毒品被抓,她只是去朋友家探望朋友的,结果就被抓了,还在朋友家收出毒品,可家人的身上和行李中都没有。现在已经拘留10多天了,请问会被判刑吗'
test_cut_raw_2 = util_words_cut.get_class_words_list(test_data_2)
test_corpus_2 = dictionary.doc2bow(test_cut_raw_2)
similarity.num_best = 5
print(similarity[test_corpus_2])  # returns the most similar samples as (index_of_document, similarity) tuples

The output is as follows:

/usr/bin/python3.4 /data/work/python-workspace/test_doc_similarity.py 
Building prefix dict from the default dictionary ...
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model from cache /tmp/jieba.cache
Loading model cost 0.521 seconds.
Loading model cost 0.521 seconds.
Prefix dict has been built succesfully.
Prefix dict has been built succesfully.
adding document #0 to Dictionary(0 unique tokens: [])
built Dictionary(61 unique tokens: ['丈夫', '法院', '结婚', '住房', '出资']...) from 16 documents (total 89 corpus positions)
starting similarity index under -Similarity-index
[(14, 0.40824830532073975), (15, 0.40824830532073975), (10, 0.35355338454246521)]

################################

creating sparse index
creating sparse matrix from corpus
PROGRESS: at document #0/16
created <16x400 sparse matrix of type '<class 'numpy.float32'>'
with 86 stored elements in Compressed Sparse Row format>
creating sparse shard #0
saving index shard to -Similarity-index.0
saving SparseMatrixSimilarity object under -Similarity-index.0, separately None
loading SparseMatrixSimilarity object from -Similarity-index.0
[(6, 0.50395262241363525), (2, 0.47140452265739441), (4, 0.33333337306976318), (1, 0.29814240336418152), (5, 0.29814240336418152)]

Process finished with exit code 0

For the first test question, documents 14, 15, and 10 are the most similar, with the corresponding similarity scores alongside.

For the second test question, documents 6, 2, 4, 1, and 5 are the most similar, again with their similarity scores.
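To make the output easier to read, the returned indices can be mapped back to the original texts, for example:

for index, score in similarity[test_corpus_1]:
    print(score, raw_documents[index])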

Method 2: doc2vec

I read the official gensim documentation for this and found it lacking. Using the same data as above, the code and results are as follows:

import multiprocessing

from gensim.models import Doc2Vec
from gensim.models.doc2vec import TaggedDocument

# Use doc2vec for the similarity judgment
cores = multiprocessing.cpu_count()
print(cores)
corpora_documents = []
for i, item_text in enumerate(raw_documents):
    words_list = util_words_cut.get_class_words_list(item_text)
    document = TaggedDocument(words=words_list, tags=[i])
    corpora_documents.append(document)

print(corpora_documents[:2])

model = Doc2Vec(size=89, min_count=1, iter=10)  # old gensim API; newer versions use vector_size/epochs
model.build_vocab(corpora_documents)
model.train(corpora_documents)  # newer gensim also requires total_examples and epochs here

print('#########', model.vector_size)

test_data_1 = '你好,我想问一下我想离婚他不想离,孩子他说不要,是六个月就自动生效离婚'
test_cut_raw_1 = util_words_cut.get_class_words_list(test_data_1)
print(test_cut_raw_1)
inferred_vector = model.infer_vector(test_cut_raw_1)
print(inferred_vector)
sims = model.docvecs.most_similar([inferred_vector], topn=3)
print(sims)
The relevant console output is as follows:
Pattern library is not installed, lemmatization won't be available.
'pattern' package not found; tag filters are not available for English
Building prefix dict from the default dictionary ...
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model from cache /tmp/jieba.cache
4
Loading model cost 0.513 seconds.
Loading model cost 0.513 seconds.
Prefix dict has been built succesfully.
Prefix dict has been built succesfully.
consider setting layer size to a multiple of 4 for greater performance
collecting all words and their counts
PROGRESS: at example #0, processed 0 words (0/s), 0 word types, 0 tags
collected 61 word types and 16 unique tags from a corpus of 16 examples and 89 words
min_count=1 retains 61 unique words (drops 0)
min_count leaves 89 word corpus (100% of original 89)
deleting the raw counts dictionary of 61 items
sample=0 downsamples 0 most-common words
downsampling leaves estimated 89 word corpus (100.0% of prior 89)
estimated required memory for 61 words and 89 dimensions: 91828 bytes
constructing a huffman tree from 61 words
built huffman tree with maximum node depth 7
resetting layer weights
training model with 1 workers on 61 vocabulary and 89 features, using sg=0 hs=1 sample=0 negative=0
expecting 16 sentences, matching count from corpus used for vocabulary survey
[TaggedDocument(words=['无偿', '居间', '介绍', '买卖', '毒品', '定性'], tags=[0]), TaggedDocument(words=['吸毒', '动态', '持有', '毒品', '认定'], tags=[1])]
worker thread finished; awaiting finish of 0 more threads
training on 890 raw words (1050 effective words) took 0.0s, 506992 effective words/s
under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

######### 89

['离婚', '孩子', '自动', '生效', '离婚']
[ 2.54629389e-03 1.87756249e-03 -9.76708368e-04 -5.15014399e-03
-7.54948880e-04 -3.74549557e-03 5.37392031e-03 3.35739669e-03
-3.50345811e-03 2.63415743e-03 -1.32059853e-03 -4.15759953e-03
-2.39425618e-03 -6.20105816e-03 -1.42006821e-03 -4.64246795e-03
3.78829846e-03 1.47493952e-03 4.49652784e-03 -5.57655795e-03
-1.40081509e-04 -7.10823014e-03 -5.34327468e-04 -4.21888893e-03
-2.96280603e-03 6.52066898e-04 5.98943839e-03 -4.01164964e-03
2.49637989e-03 -9.08742077e-04 4.65002051e-03 9.24886088e-04
1.67128560e-03 -1.93383044e-03 -4.58135502e-03 1.78024184e-03
-9.60796722e-04 7.26479106e-04 4.50814469e-03 2.58095766e-04
-4.53767460e-03 -1.72883295e-03 -3.89566552e-03 4.85864235e-03
5.90517826e-04 4.30173194e-03 3.37816169e-03 -1.08716707e-03
1.85196218e-03 1.94042712e-03 1.20989932e-03 -4.69703926e-03
-5.35873650e-03 -1.35291950e-03 -4.62053996e-03 2.15436472e-03
4.05823253e-03 8.01778078e-05 -3.84314684e-03 1.11574796e-03
-4.36050585e-03 -3.31182266e-03 -2.15692003e-03 -2.09038518e-03
4.50274721e-03 -1.85286190e-04 -5.09306230e-03 -1.12043330e-04
8.25022871e-04 2.60405545e-03 -1.73542544e-03 5.14509249e-03
-9.16058663e-04 1.01291772e-03 -7.90049613e-04 4.20650374e-03
-3.00139328e-03 3.34924040e-03 -2.11520446e-03 4.79168072e-03
2.11459701e-03 -3.07943812e-03 -5.09956060e-03 -2.34926818e-03
7.30032055e-03 -5.31428820e-03 -2.96888268e-03 4.95154131e-03
3.09590902e-03]
[(15, 0.2670447528362274), (14, 0.18831682205200195), (10, 0.07022987306118011)]
precomputing L2-norms of doc weight vectors

The doc2vec results are not very stable. Perhaps I am not using it correctly, but I could not find much useful guidance in the official documentation either.
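A few things tend to reduce this variance: training for more epochs, fixing the random seed with a single worker, and averaging several inferred vectors. A sketch using the newer gensim parameter names (vector_size/epochs instead of size/iter; the exact names depend on your gensim version):

import numpy as np

model = Doc2Vec(vector_size=100, min_count=1, epochs=50, seed=42, workers=1)
model.build_vocab(corpora_documents)
model.train(corpora_documents, total_examples=model.corpus_count, epochs=model.epochs)

# Averaging repeated inferences smooths out the randomness in infer_vector
vectors = [model.infer_vector(test_cut_raw_1, epochs=100) for _ in range(10)]
sims = model.docvecs.most_similar([np.mean(vectors, axis=0)], topn=3)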

Relevant documentation: https://radimrehurek.com/gensim/models/doc2vec.html

Method 3: LSH (see any introduction to locality-sensitive hashing for the underlying theory)

scikit-learn provides an LSH implementation (there are also LSH implementations on GitHub); what scikit-learn offers is an LSH forest:

LSH Forest: Locality Sensitive Hashing forest [1] is an alternative method for vanilla approximate nearest neighbor search methods. LSH forest data structure has been implemented using sorted arrays and binary search and 32 bit fixed-length hashes. Random projection is used as the hash family which approximates cosine distance.
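To make the quoted description concrete: the random-projection hash family takes the sign of dot products with random hyperplanes, so two vectors at a small cosine angle tend to agree on most hash bits and land in nearby buckets. A toy illustration (dimensions chosen arbitrarily):

import numpy as np

rng = np.random.RandomState(42)
planes = rng.randn(32, 8)  # 32 random hyperplanes over 8-dim vectors -> 32-bit hashes

def srp_hash(vec):
    # One hash bit per hyperplane: the sign of the projection
    return np.dot(planes, vec) > 0

a, b = rng.randn(8), rng.randn(8)
print((srp_hash(a) == srp_hash(b)).sum(), 'of 32 bits agree')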

Using the same test data again, the code is as follows:

# Use LSH
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import LSHForest

tfidf_vectorizer = TfidfVectorizer(min_df=3, max_features=None, ngram_range=(1, 2),
                                   use_idf=1, smooth_idf=1, sublinear_tf=1)
train_documents = []
for item_text in raw_documents:
    item_str = util_words_cut.get_class_words_with_space(item_text)
    train_documents.append(item_str)
x_train = tfidf_vectorizer.fit_transform(train_documents)

test_data_1 = '你好,我想问一下我想离婚他不想离,孩子他说不要,是六个月就自动生效离婚'
test_cut_raw_1 = util_words_cut.get_class_words_with_space(test_data_1)
x_test = tfidf_vectorizer.transform([test_cut_raw_1])

lshf = LSHForest(random_state=42)
lshf.fit(x_train.toarray())

distances, indices = lshf.kneighbors(x_test.toarray(), n_neighbors=3)
print(distances)
print(indices)
The console output is as follows; the result essentially matches docsim:

[[ 0.42264973  0.42264973  0.48875208]]
[[10 15 14]]
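One caveat: LSHForest was deprecated in scikit-learn 0.19 and removed in 0.21, so the code above will not run on a recent scikit-learn. For a corpus this small, brute-force cosine nearest neighbors gives equivalent top-N results; a minimal substitute might be:

from sklearn.neighbors import NearestNeighbors

nn = NearestNeighbors(n_neighbors=3, metric='cosine')
nn.fit(x_train)  # the sparse TF-IDF matrix works directly, no .toarray() needed
distances, indices = nn.kneighbors(x_test)
print(distances)
print(indices)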

These are the approaches I found for comparing document similarity. Note that LSH is generally better suited to short texts.

