Let's walk through the whole approach:
Goal: new word discovery.
What do we need for this?
- text, a way to segment the text into candidate words, and a way to judge how reasonable each candidate is
1.1 Text: any corpus downloaded from the web will do
1.2 Segmenting the text into candidate words: a sliding window, ngram(); then score each candidate's plausibility
2.1 Internal stability (internal cohesion): measured by mutual information
$\cfrac{1}{n}\log{\cfrac{p(W)}{p(c_1)p(c_2)\cdots p(c_n)}}$
- $p(W)$ is the probability of the candidate word under this segmentation,
- $p(c_1), p(c_2), \dots, p(c_n)$ are the probabilities of the individual characters of the word,
- $n$ is the number of characters making up the word $W$.
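To make the formula concrete, here is a minimal standalone sketch of the length-normalized PMI with hypothetical toy counts (the function name and numbers are illustrative, not part of the class below):

```python
import math

def avg_pmi(word_count, char_counts, total_words, total_chars, n):
    """(1/n) * log10( p(W) / (p(c1) * ... * p(cn)) )."""
    p_word = word_count / total_words          # p(W)
    p_chars = 1.0
    for c in char_counts:                      # counts of c1 .. cn
        p_chars *= c / total_chars             # p(c1) * ... * p(cn)
    return math.log(p_word / p_chars, 10) / n

# toy example: a 2-character word seen 50 times among 1000 bigrams,
# whose characters each appear 100 times among 10000 characters
score = avg_pmi(50, [100, 100], 1000, 10000, 2)
```

The higher the score, the more the characters co-occur beyond what their individual frequencies predict.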
2.2 External variability: entropy (left and right entropy)
$H(U) = E[-\log{p_i}] = -\sum\limits_{i=1}^{n}{p_i \log{p_i}}$
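A quick sketch of this formula on toy neighbor-count dictionaries (hypothetical counts; base-10 log to match the code below):

```python
import math

def entropy(count_dict):
    """H = -sum(p_i * log10(p_i)) over a neighbor-count distribution."""
    total = sum(count_dict.values())
    return sum(-(c / total) * math.log(c / total, 10)
               for c in count_dict.values())

# a candidate whose right neighbor is always the same character
# has entropy 0; an even spread over neighbors gives higher entropy
h_fixed = entropy({"的": 10})
h_varied = entropy({"的": 5, "了": 5})
```

A real word can be followed (and preceded) by many different characters, so genuine words tend to have high left and right entropy.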
Code: implemented as a class
- corpus loading: load_corpus
import math
from collections import defaultdict

class NewWordDetect:
    def __init__(self, corpus_path):
        self.corpus_path = corpus_path
        self.max_word_length = 5
        self.word_count = defaultdict(int)
        self.left_neighbor = defaultdict(dict)
        self.right_neighbor = defaultdict(dict)
        self.load_corpus(self.corpus_path)
        self.calc_pmi()
        self.calc_entropy()
        self.calc_word_values()
Now define each method. The first is loading, load_corpus(); while loading I also want to perform the segmentation, which gets its own function (ngram_count).
ngram_count has to produce the windows. For unknown candidate words a sliding window is enough, but we also need each candidate's left and right neighbors to compute the left/right entropy later. Both can be collected in the same pass, so a single scan of the text covers everything the task needs and saves time.
Here is ngram_count:
    def ngram_count(self, sentence, word_length):
        for i in range(len(sentence) - word_length + 1):
            word = sentence[i:i + word_length]
            self.word_count[word] += 1
            # record the left neighbor unless the window starts the sentence
            if i > 0:
                char = sentence[i - 1]
                self.left_neighbor[word][char] = self.left_neighbor[word].get(char, 0) + 1
            # record the right neighbor unless the window ends the sentence
            if i + word_length < len(sentence):
                char = sentence[i + word_length]
                self.right_neighbor[word][char] = self.right_neighbor[word].get(char, 0) + 1
        return
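As a quick sanity check, the same single-scan logic can be exercised as a standalone function on a toy string (this sketch mirrors the method above but returns its dictionaries instead of storing them on the class):

```python
from collections import defaultdict

def ngram_scan(sentence, word_length):
    """One pass: count all windows of `word_length` plus their left/right neighbors."""
    word_count = defaultdict(int)
    left, right = defaultdict(dict), defaultdict(dict)
    for i in range(len(sentence) - word_length + 1):
        word = sentence[i:i + word_length]
        word_count[word] += 1
        if i > 0:                               # window has a left neighbor
            ch = sentence[i - 1]
            left[word][ch] = left[word].get(ch, 0) + 1
        if i + word_length < len(sentence):     # window has a right neighbor
            ch = sentence[i + word_length]
            right[word][ch] = right[word].get(ch, 0) + 1
    return word_count, left, right

wc, left, right = ngram_scan("abcab", 2)
# "ab" occurs twice; its observed left neighbor is "c", its right neighbor is "c"
```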
Now we can load the corpus:
    def load_corpus(self, corpus_path):
        with open(corpus_path, encoding='utf8') as f:
            for line in f:
                sentence = line.strip()
                # window lengths 1 .. max_word_length - 1
                for word_length in range(1, self.max_word_length):
                    self.ngram_count(sentence, word_length)
        return
Next, compute the mutual information and the left/right entropy. Start with the entropies: for each candidate we need its left entropy and its right entropy.
First define an entropy function following the formula:
    def calc_entropy_by_word_count_dict(self, word_count_dict):
        # H = -sum(p * log10(p)) over the neighbor-count distribution
        total = sum(word_count_dict.values())
        entropy = sum([-(c / total) * math.log((c / total), 10) for c in word_count_dict.values()])
        return entropy
Compute the left and right entropy:
    def calc_entropy(self):
        self.word_left_entropy = {}
        self.word_right_entropy = {}
        for word, count_dict in self.left_neighbor.items():
            self.word_left_entropy[word] = self.calc_entropy_by_word_count_dict(count_dict)
        for word, count_dict in self.right_neighbor.items():
            self.word_right_entropy[word] = self.calc_entropy_by_word_count_dict(count_dict)
        return
Next, compute the mutual information (internal cohesion). $p(c_i)$ is a character's probability and $p(W)$ is the word's probability, so we first need the total count for each word length (a single character is just a word of length 1).
    def calc_total_count_by_length(self):
        self.word_count_by_length = defaultdict(int)
        for word, count in self.word_count.items():
            self.word_count_by_length[len(word)] += count
        return

Compute the mutual information:

    def calc_pmi(self):
        self.calc_total_count_by_length()
        self.pmi = {}
        for word, count in self.word_count.items():
            p_word = count / self.word_count_by_length[len(word)]
            p_char = 1
            for char in word:
                p_char *= self.word_count[char] / self.word_count_by_length[1]
            self.pmi[word] = math.log(p_word / p_char, 10) / len(word)
        return
Compute each candidate word's score:
    def calc_word_values(self):
        self.word_values = {}
        for word in self.pmi:
            # only multi-character candidates can be new words
            if len(word) < 2:
                continue
            le = self.word_left_entropy.get(word, 1e-3)
            re = self.word_right_entropy.get(word, 1e-3)
            self.word_values[word] = self.pmi[word] * min(re, le)
        return
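The score multiplies cohesion by the weaker of the two boundary entropies, so a candidate that is rigid on either side is penalized even if the other side varies freely. A minimal numeric sketch with hypothetical values:

```python
# hypothetical PMI and left/right entropies for one candidate
pmi, le, re = 1.2, 0.8, 1.5
word_value = pmi * min(le, re)  # the weaker boundary caps the score
```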
Run it:
if __name__ == "__main__":
    nwd = NewWordDetect("sample_corpus.txt")
    # print(nwd.word_count)
    # print(nwd.left_neighbor)
    # print(nwd.right_neighbor)
    # print(nwd.pmi)
    # print(nwd.word_left_entropy)
    # print(nwd.word_right_entropy)
    value_sort = sorted(nwd.word_values.items(), key=lambda x: x[1], reverse=True)
    print([x for x, c in value_sort if len(x) == 2][:10])
    print([x for x, c in value_sort if len(x) == 3][:10])
    print([x for x, c in value_sort if len(x) == 4][:10])
Results:
['迁移', '考虑', '尽管', '任务', '整句', '优势', '研究', '接受', '包括', '语言']
['训练集', '经网络', '大匹配', '情况下', '历史性', 'CRF', '半监督', '然语言', 'Kit', '事实上']
['神经网络', '自然语言', '最大匹配', 'RF标注', '信息处理', '十年回顾', 'gram', ' Kit', '技术进步', '统计度量']