First build a word-frequency dictionary from a large text corpus; then, for a misspelled word, generate candidates by editing one or two letters and return the candidate that occurs most frequently in the corpus.
import collections
import re
f = open("big.txt","r")
text = f.read()
def words(text):
    """Extract all lowercase words from the text."""
    return re.findall('[a-z]+', text.lower())
def train(features):
    """Build a dictionary whose keys are the words of the corpus and
    whose values are their frequencies; unseen words default to a
    count of 1, so no candidate ever has probability zero.
    """
    model = collections.defaultdict(lambda: 1)
    for word in features:
        model[word] += 1
    return model
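A quick sanity check of `train` on a toy word list (hypothetical input, not the big.txt corpus): every word starts from the default count of 1, so a word seen twice ends up at 3, and a word never seen still gets 1 — this smoothing keeps `NWORDS.get` from ruling out rare candidates.

```python
import collections

def train(features):
    # the default count of 1 acts as smoothing for unseen words
    model = collections.defaultdict(lambda: 1)
    for word in features:
        model[word] += 1
    return model

counts = train(['the', 'the', 'cat'])
print(counts['the'])  # seen twice -> 1 + 2 = 3
print(counts['dog'])  # never seen -> default 1
```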
NWORDS = train(words(text))
f.close()
def edits1(word):
    """Return all strings one edit away from word."""
    # the alphabet used for replacements and insertions
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    # all ways of splitting the word into two halves, as (left, right) tuples
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    # delete one letter
    deletes = [a + b[1:] for a, b in splits if b]
    # swap two adjacent letters
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    # replace each letter with every letter of the alphabet
    replaces = [a + c + b[1:] for a, b in splits for c in alphabet if b]
    # insert a letter at every position
    inserts = [a + c + b for a, b in splits for c in alphabet]
    # concatenate the lists and deduplicate via a set
    return set(deletes + transposes + replaces + inserts)
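For a word of length n and a 26-letter alphabet, edits1 generates n deletions, n-1 transpositions, 26n replacements, and 26(n+1) insertions — 54n+25 strings before the set removes duplicates. A small check of that count (restating edits1 so the snippet runs on its own, and returning the raw list alongside the set):

```python
def edits1_counts(word):
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits for c in alphabet if b]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    raw = deletes + transposes + replaces + inserts
    return raw, set(raw)

raw, unique = edits1_counts('something')  # n = 9
print(len(raw))     # 54 * 9 + 25 = 511 candidates before deduplication
print(len(unique))  # fewer after duplicates collapse in the set
```

The duplicates come from replacements that reproduce the original word (replacing a letter with itself) and from insertions of a letter next to an identical one.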
def known_edits(word):
    """Apply edits1 twice to get the set of words two edits away
    that actually appear in the corpus."""
    return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS)
def known(words):
    """Return the subset of words that appear in the corpus."""
    return set(w for w in words if w in NWORDS)
def correct(word):
    """Return the highest-frequency correction, preferring the word
    itself if known, then one-edit candidates, then two-edit ones."""
    candidates = known([word]) or known(edits1(word)) or known_edits(word) or [word]
    return max(candidates, key=NWORDS.get)
if __name__ == '__main__':
    print(correct('speling'))
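Since big.txt is an external dependency, here is a self-contained run of the same pipeline over a hypothetical inline corpus. 'speling' is one insertion away from 'spelling', so the one-edit candidate set is non-empty and wins:

```python
import collections
import re

# hypothetical toy corpus standing in for big.txt
corpus = "spelling is hard but spelling errors are common in spelling"

def words(text):
    return re.findall('[a-z]+', text.lower())

def train(features):
    model = collections.defaultdict(lambda: 1)
    for w in features:
        model[w] += 1
    return model

NWORDS = train(words(corpus))

def edits1(word):
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits for c in alphabet if b]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def known(ws):
    return set(w for w in ws if w in NWORDS)

def known_edits(word):
    return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS)

def correct(word):
    candidates = known([word]) or known(edits1(word)) or known_edits(word) or [word]
    return max(candidates, key=NWORDS.get)

print(correct('speling'))   # one insertion away from 'spelling'
print(correct('spelling'))  # already in the corpus -> returned unchanged
```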