Below is a Chinese word-frequency analysis of a novel.
There are three files: novel.txt, punctuation.txt, and meaningless.txt,
containing the novel text, the special symbols to strip, and the meaningless (stop) words, respectively.
The Python code for counting the word frequencies is as follows:
import jieba  # jieba Chinese word-segmentation library

# Read the novel from its file
with open('novel.txt', 'r', encoding='UTF-8') as novelFile:
    novel = novelFile.read()

# Filter the special symbols out of the novel text
with open('punctuation.txt', 'r', encoding='UTF-8') as punctuationFile:
    for punctuation in punctuationFile.readlines():
        # each line holds one symbol; [0] takes it without the trailing newline
        novel = novel.replace(punctuation[0], ' ')
# Add story-specific words to the dictionary so jieba keeps them whole
jieba.add_word('凤十')
jieba.add_word('林胖子')
jieba.add_word('黑道')
jieba.add_word('饿狼帮')
# Read the meaningless (stop) words from their file
with open('meaningless.txt', 'r', encoding='UTF-8') as meaninglessFile:
    mLessSet = set(meaninglessFile.read().split('\n'))
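The listing above stops after loading the stop-word set. A minimal sketch of the remaining step, segmenting and tallying, might look like the following; in the real pipeline the tokens would come from jieba.lcut(novel), and the small hand-made token list plus the variable name wordCount are stand-ins for illustration:

```python
from collections import Counter

# In the real pipeline the tokens come from jieba:
#     tokens = jieba.lcut(novel)
# Here a small hand-made token list stands in for the segmented novel.
tokens = ['凤十', '的', '凤十', '了', '饿狼帮', '凤十', '饿狼帮']
mLessSet = {'的', '了', ''}  # stop words, as loaded from meaningless.txt

# Drop empty tokens and stop words, then count the rest
wordCount = Counter(tok for tok in tokens if tok.strip() and tok not in mLessSet)

# Print the most frequent words, highest count first
for word, count in wordCount.most_common(10):
    print(word, count)
```

With the sample tokens this prints 凤十 3 and then 饿狼帮 2; the stop words 的 and 了 are excluded from the tally.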