Preparation:
1. Download Anaconda
2. Prepare the text you want to turn into a word cloud
3. Install the word cloud package: pip install wordcloud
4. Install the jieba word segmentation package: pip install jieba
1. Import the required libraries:
- #coding:utf-8
- import jieba #word segmentation package
- import numpy #numerical computing package
- import codecs #codecs.open lets you specify the file's encoding and decodes to unicode automatically on read
- import pandas
- import matplotlib
- matplotlib.use('TkAgg') #the backend name is case-sensitive: 'TkAgg', not 'TKAgg'
- import matplotlib.pyplot as plt
- from wordcloud import WordCloud #word cloud package
2. Load the text of "The Emperor's New Clothes"
- file=codecs.open(u"1.txt",'r',encoding='utf-8') #state the encoding explicitly so the text decodes correctly
- content=file.read()
- file.close()
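The same read can be done with the built-in open, which also accepts an encoding argument. A self-contained sketch of the step above, where the sample file and its one-line content stand in for 1.txt (both are assumptions for illustration):

```python
# Write a small UTF-8 sample file (standing in for 1.txt), then read it back.
sample = "皇帝什么衣服也没有穿!"
with open("sample.txt", "w", encoding="utf-8") as f:
    f.write(sample)

# The with-statement closes the file even if reading raises an exception.
with open("sample.txt", "r", encoding="utf-8") as f:
    content = f.read()
```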
3. Segment the text with jieba
- segment=[]
- segs=jieba.cut(content) #cut the text into words
- for seg in segs:
- if len(seg)>1 and seg!='\r\n':
- segment.append(seg)
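jieba.cut returns a generator of tokens, and the loop keeps only multi-character tokens while dropping line breaks. A self-contained sketch of that filter, with a hand-made token list standing in for jieba's output (the tokens are an assumption for illustration):

```python
# Hand-made tokens as jieba.cut might yield them for a short sentence;
# single characters and the '\r\n' line break should be filtered out.
tokens = ["皇帝", "的", "新衣", "\r\n", "骗子", "说"]

segment = []
for seg in tokens:
    # keep tokens longer than one character and skip Windows line breaks
    if len(seg) > 1 and seg != '\r\n':
        segment.append(seg)
```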
4. Remove high-frequency stopwords
- words_df=pandas.DataFrame({'segment':segment})
- words_df.head()
- stopwords=pandas.read_csv("not.txt",index_col=False,quoting=3,sep="\t",names=['stopword'],encoding="utf8") #drop the high-frequency filler words we don't need
- words_df=words_df[~words_df.segment.isin(stopwords.stopword)]
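The `~isin` pattern keeps only the rows whose word does not appear in the stopword list. A minimal sketch with toy data (the words and stopwords here are invented for illustration):

```python
import pandas

# Toy segmented words and a toy stopword list (both invented for illustration).
words_df = pandas.DataFrame({"segment": ["皇帝", "的", "新衣", "的", "骗子"]})
stopwords = pandas.DataFrame({"stopword": ["的", "了", "是"]})

# isin builds a boolean mask (True where the word IS a stopword);
# ~ inverts it, so only non-stopword rows survive.
filtered = words_df[~words_df.segment.isin(stopwords.stopword)]
```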
5. Count word frequencies
- words_stat=words_df.groupby('segment').size().reset_index(name='计数') #count each word's occurrences; the dict form agg({"计数":numpy.size}) was removed in newer pandas
- words_stat=words_stat.sort_values(by="计数",ascending=False)
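Step 5 builds a frequency table with one row per word. A self-contained sketch of that counting on toy data, using groupby/size (the words and counts are invented for illustration):

```python
import pandas

# Toy segmented words (invented for illustration).
words_df = pandas.DataFrame({"segment": ["皇帝", "新衣", "皇帝", "骗子", "皇帝"]})

# groupby + size counts each word's occurrences; reset_index names the
# count column "计数" ("count"), matching the tutorial's frequency table.
words_stat = words_df.groupby("segment").size().reset_index(name="计数")
words_stat = words_stat.sort_values(by="计数", ascending=False)
```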
6. Build the word cloud with the wordcloud package
- from imageio import imread #scipy.misc.imread was removed in SciPy 1.2; imageio's imread is a drop-in replacement
- import matplotlib.pyplot as plt
- from wordcloud import WordCloud,ImageColorGenerator
- %matplotlib inline
- bimg=imread('timg.jpeg') #mask image that shapes the cloud
- wordcloud=WordCloud(background_color="white",mask=bimg,font_path='simhei.ttf') #a Chinese font such as simhei.ttf is required, otherwise the words render as boxes
- #wordcloud=wordcloud.fit_words(words_stat.head(4000).itertuples(index=False))
- words = words_stat.set_index("segment").to_dict() #nested dict: {'计数': {word: count, ...}}
- wordcloud=wordcloud.fit_words(words["计数"])
- bimgColors=ImageColorGenerator(bimg)
- plt.axis("off")
- plt.imshow(wordcloud.recolor(color_func=bimgColors))
- plt.show()
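The set_index/to_dict step converts the frequency table into the word-to-count mapping that fit_words expects; to_dict() nests by column name, so that mapping sits under the count column's key. A sketch with toy numbers (the words and counts are invented for illustration):

```python
import pandas

# Toy frequency table (invented counts) in the same shape as words_stat.
words_stat = pandas.DataFrame({"segment": ["皇帝", "新衣"], "计数": [3, 1]})

# to_dict() returns {column_name: {index_value: cell_value}}, so the
# word -> count mapping sits under the "计数" key.
words = words_stat.set_index("segment").to_dict()
freq = words["计数"]
```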
The result looks like this: