Today I'll use the simplest possible example to walk through: scraping data -> saving and processing it -> generating a word cloud from the article. (If you have hands, you can learn this!)
- Scraping the data
import requests
from bs4 import BeautifulSoup as bs

url = 'https://baijiahao.baidu.com/s?id=1667352390528014727&wfr=spider&for=pc'
headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
}
html = requests.get(url, headers=headers)
soup = bs(html.text, 'lxml')
# Open the output file once, then append the text of every <p> tag
with open('./政府.txt', 'a+', encoding='utf-8') as f:
    for p in soup.find_all('p'):
        f.write(p.get_text())
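The core of the scraping step is just "find every `<p>` tag and collect its text." If you want to check that logic offline without hitting the network or installing bs4, the same idea can be sketched with the standard library's `html.parser` (the `sample_html` string below is a made-up stand-in for the fetched article, not the real page):

```python
from html.parser import HTMLParser

class PTextExtractor(HTMLParser):
    """Collects the text inside every <p> tag, mimicking soup.find_all('p')."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.in_p = True
            self.paragraphs.append('')  # start a new paragraph buffer

    def handle_endtag(self, tag):
        if tag == 'p':
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.paragraphs[-1] += data  # accumulate text between <p> and </p>

# A tiny hypothetical page standing in for the fetched article
sample_html = "<html><body><p>First paragraph.</p><div>skip</div><p>Second.</p></body></html>"
parser = PTextExtractor()
parser.feed(sample_html)
print(parser.paragraphs)  # → ['First paragraph.', 'Second.']
```

In the real script BeautifulSoup is still the better choice, since it tolerates the messy, unclosed tags that real pages are full of.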
- Processing the data, using jieba's keyword extraction
import jieba.analyse

file_path = './政府.txt'
with open(file_path, 'r', encoding='utf-8') as fp:
    text = fp.read()
# Top 30 keywords by TF-IDF weight, joined into one space-separated string
f = ' '.join(jieba.analyse.extract_tags(text, topK=30, withWeight=False))
print(f)
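What is `extract_tags` actually doing? Conceptually, it ranks words by TF-IDF, which you can think of as a smarter version of plain frequency counting. A very rough, library-free sketch of the idea (note the assumptions: this splits on whitespace, which only works for English-style text, and it ignores the IDF weighting entirely; jieba exists precisely because Chinese needs real word segmentation):

```python
from collections import Counter

def top_keywords(text, top_k=5, stopwords=frozenset({'the', 'a', 'of'})):
    """Naive keyword extraction: frequency count minus stopwords.
    jieba.analyse.extract_tags additionally segments Chinese text and
    weights each word by inverse document frequency."""
    words = [w.strip('.,!?').lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in stopwords)
    return [w for w, _ in counts.most_common(top_k)]

print(top_keywords("the cloud of words makes a word cloud of clouds", top_k=2))
```

The stopword set here is a hypothetical three-word placeholder; a real list would be much longer.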
- Generating the word cloud, saving it, and displaying it
from wordcloud import WordCloud
import matplotlib.pyplot as plt

cloud = WordCloud(
    font_path="C:/Windows/Fonts/simfang.ttf",  # a font with Chinese glyphs, or the words render as boxes
    background_color='black',
    width=1000,
    height=1000
).generate(f)
plt.imshow(cloud, interpolation="bilinear")
plt.axis('off')
cloud.to_file("./government_image.png")
plt.show()
- The resulting image
- This is a very simple example; you can also tweak the parameters for something more fun and show off your own style. If anything in this post is wrong, please point it out in the comments. Thanks!