Tokenize text with spaCy using a custom tokenizer: define custom_tokenizer, add special_cases, call nlp to convert the text into a Doc object, split the text "frac{uv}" with the custom rules, then print the tokens and debug them with tokenizer.explain, as shown below:
import re
import spacy
from spacy.tokenizer import Tokenizer

def custom_tokenizer(nlp):
    special_cases = {"uv": [{"ORTH": "u"}, {"ORTH": "v"}]}  # special-case rules
    prefix_re = re.compile(r'''^[\[\(\{]''')   # prefix punctuation
    suffix_re = re.compile(r'''[\]\)\}]$''')   # suffix punctuation
    infix_re = re.compile(r'''[-~=\{]''')      # infix punctuation
    simple_url_re = re.compile(r'''^https?://''')
    return Tokenizer(nlp.vocab, rules=special_cases,
                     prefix_search=prefix_re.search,
                     suffix_search=suffix_re.search,
                     infix_finditer=infix_re.finditer,
                     url_match=simple_url_re.match)
nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
text = "frac{uv}"
doc = nlp(text)
tok_exp = nlp.tokenizer.explain(text)
for t in tok_exp:
    print(t[1], "\t", t[0])
The output is:
frac TOKEN
{ INFIX
uv TOKEN
} SUFFIX
Notice that uv is not split apart. Reading tokenizer.pyx turns up the two lines below: when a special case is registered, the tokenizer only adds it to the special-case matcher if the string contains a prefix, suffix, infix, or space character (or if faster_heuristics is disabled). Since "uv" contains none of these, it is never added to the matcher, and the special case cannot be matched against the substring left over after the infix split.
if not self.faster_heuristics or self.find_prefix(string) or self.find_infix(string) or self.find_suffix(string) or " " in string:
    self._special_matcher.add(string, None, self._tokenize_affixes(string, False))
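To see why "uv" falls through this check, you can run the same affix probes by hand on the custom tokenizer (a minimal sketch, assuming the nlp object built above; find_prefix, find_suffix and find_infix are the documented Tokenizer methods the heuristic calls):
tok = nlp.tokenizer
# "uv" matches none of the custom affix patterns, so the heuristic skips it
print(tok.find_prefix("uv"))   # expected 0: no prefix match
print(tok.find_suffix("uv"))   # expected 0: no suffix match
print(tok.find_infix("uv"))    # expected []: no infix match
print(" " in "uv")             # False: no space character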
The fix is therefore to adjust custom_tokenizer and pass faster_heuristics=False to the Tokenizer:
def custom_tokenizer(nlp):
    special_cases = {"uv": [{"ORTH": "u"}, {"ORTH": "v"}]}  # special-case rules
    prefix_re = re.compile(r'''^[\[\(\{]''')   # prefix punctuation
    suffix_re = re.compile(r'''[\]\)\}]$''')   # suffix punctuation
    infix_re = re.compile(r'''[-~=\{]''')      # infix punctuation
    simple_url_re = re.compile(r'''^https?://''')
    return Tokenizer(nlp.vocab, rules=special_cases,
                     prefix_search=prefix_re.search,
                     suffix_search=suffix_re.search,
                     infix_finditer=infix_re.finditer,
                     url_match=simple_url_re.match,
                     faster_heuristics=False)
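Re-running the example should now show "u" and "v" as separate tokens; a quick check is sketched below (the SPECIAL-N labels are what tokenizer.explain typically reports for special-case splits and may vary by spaCy version):
nlp.tokenizer = custom_tokenizer(nlp)
print([t.text for t in nlp("frac{uv}")])   # expected: ['frac', '{', 'u', 'v', '}']
for label, tok_text in nlp.tokenizer.explain("frac{uv}"):
    print(tok_text, "\t", label)
# frac TOKEN
# { INFIX
# u SPECIAL-1
# v SPECIAL-2
# } SUFFIX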