【TF-IDF】Computing keyword/phrase weights for a document with TF-IDF in Python, and generating a word cloud

1. Computing a document's keywords or phrases with TF-IDF:
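
The idea: each line of the input file is treated as one document. sklearn's TfidfVectorizer (with its default smooth_idf=True) learns an inverse document frequency idf(t) = ln((1 + n) / (1 + df(t))) + 1 for every token t, where n is the number of documents and df(t) is the number of documents containing t. The extractor below then scores each candidate word in a document by adding up its idf once per occurrence, which is equivalent to tf(t, d) * idf(t), and keeps the top-scoring words.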

The code is as follows:

from re import split
from jieba.posseg import dt
from sklearn.feature_extraction.text import TfidfVectorizer
from collections import Counter
from time import time
import jieba


# POS tags to keep: nouns, verbs, adjectives, person/place/organization names, English words, etc.
FLAGS = set('a an b f i j l n nr nrfg nrt ns nt nz s t v vi vn z eng'.split())

def cut(text):
    # Split the text on any character that is not a letter, digit, or Chinese character,
    # then POS-tag each piece with jieba and keep only words longer than 2 characters
    # whose POS tag is in FLAGS.
    for sentence in split('[^a-zA-Z0-9\u4e00-\u9fa5]+', text.strip()):
        for w in dt.cut(sentence):
            if len(w.word) > 2 and w.flag in FLAGS:
                yield w.word

class TFIDF:
    def __init__(self, idf):
        self.idf = idf  # word -> idf value learned from the training corpus

    @classmethod
    def train(cls, texts):
        # Fit sklearn's TfidfVectorizer using the custom tokenizer above
        model = TfidfVectorizer(tokenizer=cut)
        model.fit(texts)
        idf = {w: model.idf_[i] for w, i in model.vocabulary_.items()}
        return cls(idf)

    def get_idf(self, word):
        # Words not seen during training fall back to the largest (rarest) idf
        return self.idf.get(word, max(self.idf.values()))

    def extract(self, text, top_n=10):
        # Adding idf once per occurrence is equivalent to scoring by tf * idf
        counter = Counter()
        for w in cut(text):
            counter[w] += self.get_idf(w)
        return [i[0] for i in counter.most_common(top_n)]


if __name__ == '__main__':
    t0 = time()
    # Each line of the input file is treated as one document
    with open('./nlp-homework.txt', encoding='utf-8') as f:
        _texts = f.read().strip().split('\n')
    tfidf = TFIDF.train(_texts)
    for _text in _texts:
        # jieba's three segmentation modes, shown here only for comparison;
        # the extractor uses its own cut() tokenizer
        seq_list = jieba.cut(_text, cut_all=True)      # full mode
        # seq_list = jieba.cut(_text, cut_all=False)   # precise mode
        # seq_list = jieba.cut_for_search(_text)       # search-engine mode
        keywords = tfidf.extract(_text)
        print(keywords)
        # Append the keywords to the text file that feeds the word cloud
        with open('./resultciyun.txt', 'a+', encoding='utf-8') as g:
            for i in keywords:
                g.write(str(i) + " ")
    print(time() - t0)
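
As a quick sanity check, the extractor above can also be exercised on a tiny in-memory corpus; the sentences below are made up purely for illustration, and the exact output depends on how jieba segments them:

# Sanity-check sketch on a made-up three-document corpus (illustrative only)
toy_docs = [
    '自然语言处理技术是人工智能研究的重要方向',
    '词云可以直观地展示文本中的高频关键词',
    '人工智能和机器学习推动了自然语言处理技术的发展',
]
toy_model = TFIDF.train(toy_docs)
# Words that occur in fewer documents get a larger idf and therefore rank higher;
# the tokens actually returned depend on jieba's segmentation and the POS filter in cut()
print(toy_model.extract(toy_docs[0], top_n=5))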
2. Generating the word cloud:

The code is as follows:

  • Note that the wordcloud package must be installed first: pip install wordcloud
  • To make sure Chinese characters display correctly, download the SimSun.ttf font file and place it in the same directory as the script.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Read the keyword file written in step 1
filename = "resultciyun.txt"
with open(filename, encoding='utf-8') as f:
    resultciyun = f.read()

# font_path points at the SimSun font so Chinese characters render correctly
wordcloud = WordCloud(font_path="SimSun.ttf").generate(resultciyun)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
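
To keep the image instead of only displaying it, WordCloud also provides a to_file method; a minimal sketch (the size, colour, and output filename below are arbitrary choices):

# Optional: render a larger cloud and write it to a PNG file
wc = WordCloud(font_path="SimSun.ttf", width=800, height=600,
               background_color="white", max_words=100).generate(resultciyun)
wc.to_file("ciyun.png")    # saved next to the script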
3. The resulting word cloud image

(Word cloud image)
