# requires: pip install gensim
from gensim import corpora, models, similarities


def get_tfidf(words_lists):
    texts = words_lists
    dictionary = corpora.Dictionary(texts)                 # word <-> integer-id mapping
    feature_cnt = len(dictionary.token2id)                 # vocabulary size
    corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words per sentence
    tfidf = models.TfidfModel(corpus)                      # TF-IDF model over this corpus
    return tfidf, dictionary, corpus, feature_cnt
texts: a 2-D list; each row is one sentence, given as its word-segmentation result.
dictionary: a word/index mapping; dictionary.token2id maps each word to an integer id (and indexing the dictionary by id gives the word back).
corpus: each sentence converted to how many times each word occurs, as (word-id, count) pairs: [[(id1, count), (id2, count), ...], [(id0, count), (id2, count), ...]].
tfidf: a TF-IDF model built on the current corpus.
def get_semantic_similarity_for_line(words_list1, tfidf, dictionary, corpus, feature_cnt):
    # words_list1 is an already-segmented query, e.g. jieba.lcut(keyword)
    kw_vector = dictionary.doc2bow(words_list1)
    # index the TF-IDF-weighted corpus vectors for cosine-similarity lookups
    index = similarities.SparseMatrixSimilarity(tfidf[corpus], num_features=feature_cnt)
    sim = index[tfidf[kw_vector]]  # one similarity score per corpus sentence
    return sim