Data
Crawl any news article from the school's official website, extract keywords from its content with the jieba module, take the 50 words with the highest weights from the extraction result, and draw a word cloud from them. (Here I crawled my own school's site.)
Code
from urllib.request import urlopen
from bs4 import BeautifulSoup
from jieba.analyse import extract_tags
from pyecharts import WordCloud  # pyecharts 0.5.x API

# Fetch the news page and decode it as UTF-8
response = urlopen("http://www.gxstnu.edu.cn/info/1024/10188.htm")
source = response.read().decode("utf-8")

# Locate the <div class="v_news_content"> that holds the article body
soup = BeautifulSoup(source, features="html.parser")
div_content = soup.find("div", attrs={"class": "v_news_content"})
# print(div_content)

# Concatenate the text of every <p> inside the article body
a_str = ""
for p in div_content.find_all("p"):
    a_str += p.text
# print(a_str)

# Compute TF-IDF weights and keep the top 50 keywords
tf_idf_w = extract_tags(a_str, topK=50, withWeight=True)
print(tf_idf_w)

# Split the (word, weight) tuples into the two parallel lists
# that pyecharts 0.5.x expects
data_x = []
data_y = []
for word, weight in tf_idf_w:
    data_x.append(word)
    data_y.append(weight)
# print(data_x)
# print(data_y)

wc = WordCloud("Top 50 extracted keywords")
wc.add("", data_x, data_y, shape="star")
wc.render("keywords.html")
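The `extract_tags(..., withWeight=True)` call returns a list of `(keyword, weight)` tuples sorted by weight, and the loop in the script simply splits them into the two parallel lists pyecharts 0.5.x wants. A minimal sketch with made-up keywords and weights (the sample data here is hypothetical, not from the crawled page):

```python
# Hypothetical sample of what extract_tags(..., withWeight=True) returns:
# (keyword, TF-IDF weight) pairs, highest weight first.
sample = [("校园", 0.42), ("新闻", 0.31), ("学生", 0.18)]

# pyecharts 0.5.x takes words and weights as two parallel lists;
# zip(*...) splits the tuples apart in one step.
words, weights = map(list, zip(*sample))

print(words)    # ['校园', '新闻', '学生']
print(weights)  # [0.42, 0.31, 0.18]
```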
Result (screenshot of the rendered word cloud)
Note: this uses pyecharts version 0.5.11.
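The 0.5.x WordCloud API used above (chart constructor taking a title, positional word and weight lists) was removed in pyecharts 1.0, so with a newer install the `from pyecharts import WordCloud` line fails. If you want to run the script as written, pinning the old version should work:

```shell
pip install pyecharts==0.5.11
```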