This project implements English and Chinese word-frequency statistics, based on Python 3.6.
English word frequency statistics
The text to analyse is the English version of Hamlet, as a txt file:
hamlet.txt
Approach:
- Read the words from the file
- Normalise them to a single case, e.g. all lowercase or all uppercase
- Split the text into individual words
- Loop over the words and count them
- Print the results
Code:
# Read the file and return it as a lowercased, punctuation-free string
def getTxt():
    txt = open('hamlet.txt').read()
    txt = txt.lower()
    for ch in '!"@#$%^&*()+,-./:;<=>?@[\\]_`~{|}':  # replace punctuation with spaces
        txt = txt.replace(ch, ' ')  # str.replace returns a new string, so reassign it
    return txt
# 1. Get the normalised text
hamletTxt = getTxt()
# 2. Split it into a list of words
txtArr = hamletTxt.split()
# 3. Loop over the words and count them
counts = {}
for word in txtArr:
    counts[word] = counts.get(word, 0) + 1
# 4. Convert the dict to a list so it can be sorted for printing
countsList = list(counts.items())
countsList.sort(key=lambda x: x[1], reverse=True)  # sort by count, descending
# 5. Print the 10 most frequent words
for i in range(10):
    word, count = countsList[i]
    print('{0:<10}{1:>5}'.format(word, count))  # word left-aligned in 10 columns, count right-aligned in 5
Notes:
1. The line counts[word] = counts.get(word, 0) + 1
makes clever use of the dict get method: it returns the existing count when word is already a key and the default 0 otherwise, so a single line handles both first and repeated occurrences.
2. In countsList.sort(key=lambda x:x[1], reverse=True)
pay attention to the sort parameters: key picks out the count (the second element of each tuple) and reverse=True sorts from largest to smallest. A standard-library alternative is sketched below.
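For comparison only, and not part of the original design, the same counting and top-10 printing can also be done with collections.Counter from the standard library; a minimal sketch, reusing the getTxt() function defined above (the name wordCounter is just illustrative):

from collections import Counter

wordCounter = Counter(getTxt().split())          # Counter builds the word -> count mapping in one call
for word, count in wordCounter.most_common(10):  # most_common already sorts by count, descending
    print('{0:<10}{1:>5}'.format(word, count))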
Output:
Chinese word frequency statistics
The Chinese statistics use the third-party jieba library to analyse Romance of the Three Kingdoms, as a txt file:
三国演义.txt (opened as threekingdoms.txt in the code below)
Approach:
- Read the text as a single string
- Segment the text into a list of words
- Loop over the words and count them
- Print the results
Code:
import jieba
txt = open('threekingdoms.txt', 'r', encoding='utf-8').read()
excludes = ['却说','二人','不可','军士','军马','引兵','不能','如此',
            '商议','荆州','如何','将军','次日','大喜','左右','天下',
            '东吴','于是','今日','不敢','魏兵','陛下','一人','都督',
            '人马','不知','汉中','只见','众将','后主','蜀兵']  # non-name words to exclude from the results
words = jieba.lcut(txt)  # jieba.lcut segments the text and returns a list of words
counts = {}
for word in words:
    if len(word) == 1:  # skip single characters
        continue
    elif word == '诸葛亮' or word == '孔明曰':  # merge aliases of the same person into one name
        reword = '孔明'
    elif word == '关公' or word == '云长':
        reword = '关羽'
    elif word == '玄德' or word == '玄德曰' or word == '主公':
        reword = '刘备'
    elif word == '孟德' or word == '丞相':
        reword = '曹操'
    else:
        reword = word
    counts[reword] = counts.get(reword, 0) + 1
# remove the non-name words
for key in excludes:
    del counts[key]
# convert to a list, sort, and print
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(10):
    word, count = items[i]
    print('{0:<5}{1:>5}次'.format(word, count))
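The chain of elif branches that merges aliases can also be expressed as a lookup dictionary, which makes the alias table easier to extend. A minimal sketch, assuming the same words list, excludes list, and alias pairs as above (the name aliases is illustrative; counts.pop(key, None) is used instead of del so that an exclude word that never appears in the text does not raise KeyError):

# map each alias to its canonical name; anything not listed maps to itself
aliases = {'诸葛亮': '孔明', '孔明曰': '孔明',
           '关公': '关羽', '云长': '关羽',
           '玄德': '刘备', '玄德曰': '刘备', '主公': '刘备',
           '孟德': '曹操', '丞相': '曹操'}

counts = {}
for word in words:
    if len(word) == 1:                    # skip single characters
        continue
    reword = aliases.get(word, word)      # fall back to the word itself
    counts[reword] = counts.get(reword, 0) + 1

for key in excludes:
    counts.pop(key, None)                 # no KeyError if the word never appeared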