jieba is an excellent third-party library for Chinese word segmentation
- Chinese text must be segmented to obtain individual words;
- jieba is an excellent third-party Chinese word segmentation library and must be installed separately;
- the jieba library offers three segmentation modes, and the simplest usage requires only one function.
(cmd command line) pip install jieba
The three segmentation modes of jieba
Precise mode, full mode, and search-engine mode
- Precise mode: splits the text exactly, with no redundant words;
- Full mode: scans out every possible word in the text, with redundancy;
- Search-engine mode: starts from precise mode, then re-splits long words.
Code version 1:
import jieba
txt = open("threekingdoms.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)  # segment the text into a list of words for counting
counts = {}
for word in words:
    if len(word) == 1:   # skip punctuation and single characters
        continue
    counts[word] = counts.get(word, 0) + 1  # tally word frequencies
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)  # sort once, by frequency, descending
for i in range(15):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
You will find the results are not ideal: several entries are different names for the same person, so one character's count is split across aliases.
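The fix is to map every alias of a person to one canonical name before counting. A minimal sketch of that idea, using a small illustrative alias table (only a subset of the names the full program handles):

```python
# Illustrative alias table: maps each alias to a canonical name.
aliases = {"诸葛亮": "孔明", "孔明曰": "孔明", "玄德": "刘备", "玄德曰": "刘备"}

def merged_counts(words):
    counts = {}
    for word in words:
        if len(word) == 1:                    # skip single chars / punctuation
            continue
        canonical = aliases.get(word, word)   # fold aliases together
        counts[canonical] = counts.get(canonical, 0) + 1
    return counts

demo = ["玄德", "孔明", "诸葛亮", "玄德曰", "。"]
print(merged_counts(demo))  # → {'刘备': 2, '孔明': 2}
```

The same dictionary lookup replaces the long `elif` chain in version 2 below, which makes adding new aliases a one-line change.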
Code version 2:
import jieba
excludes = {"将军","却说","荆州","二人","不可","不能","如此","丞相"}
txt = open("threekingdoms.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)
counts = {}
for word in words:
    if len(word) == 1:
        continue
    elif word == "诸葛亮" or word == "孔明曰":
        rword = "孔明"
    elif word == "关公" or word == "云长":
        rword = "关羽"
    elif word == "玄德" or word == "玄德曰":
        rword = "刘备"
    elif word == "孟德":
        rword = "曹操"
    else:
        rword = word
    counts[rword] = counts.get(rword, 0) + 1
for word in excludes:          # drop common non-name words
    counts.pop(word, None)     # pop() avoids a KeyError if a word is absent
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(5):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
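The manual dict-and-sort pattern above works, but the standard library's `collections.Counter` does the same counting and ranking more concisely. A small sketch on a hand-made word list:

```python
from collections import Counter

# A hand-made token list standing in for jieba.lcut() output.
words = ["刘备", "曹操", "刘备", "孔明", "曹操", "刘备"]

counts = Counter(words)                       # counts each word automatically
for word, count in counts.most_common(2):     # top 2 by frequency, descending
    print("{0:<10}{1:>5}".format(word, count))
# → 刘备 3, then 曹操 2
```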