Dataset: https://zhuanlan.zhihu.com/p/145436365?utm_source=wechat_session&utm_medium=social&utm_oi=50414917517312
Book: https://web.stanford.edu/~jurafsky/slp3/
2.1 Regular Expressions
Traditional text processing.
Text normalization covers tokenization, lemmatization, stemming, and sentence segmentation.
Text Normalization
- Tokenizing (segmenting) words, e.g. with the Natural Language Toolkit (NLTK)
- Normalizing word formats
- Segmenting sentences in running text
Text processing tasks and common algorithms:

类型 | 算法 |
---|---|
Tokenization | Byte-pair encoding (BPE), MaxMatch |
Stemming | The Porter stemmer (hand-written rules) |
Sentence segmentation | Stanford CoreNLP toolkit: punctuation-based |
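Of the tokenization algorithms in the table, MaxMatch is the simplest to illustrate: at each position, greedily take the longest dictionary word. A minimal sketch, assuming a toy dictionary (the word list below is an assumption for illustration):

```python
# MaxMatch (greedy longest-match) word segmentation.
def max_match(text, dictionary):
    """Greedily match the longest dictionary word at each position."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible span first, shrinking until a match.
        for j in range(len(text), i, -1):
            if text[i:j] in dictionary:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No dictionary word matches: emit a single character.
            tokens.append(text[i])
            i += 1
    return tokens

words = {"we", "can", "only", "see"}  # toy dictionary (assumption)
print(max_match("wecanonlysee", words))  # ['we', 'can', 'only', 'see']
```

MaxMatch works reasonably for languages like Chinese where words are short; for English-like strings it is fragile, which is one motivation for statistical tokenizers such as BPE.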
Minimum edit distance: a measure of how similar two strings are, i.e. the minimum number of insertions, deletions, and substitutions needed to transform one into the other.
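Minimum edit distance is computed with dynamic programming. A minimal sketch with unit costs for all three operations (Levenshtein distance; note that with substitution cost 2 instead of 1, the distance below would differ):

```python
# Minimum edit distance via dynamic programming, unit operation costs.
def min_edit_distance(source, target):
    n, m = len(source), len(target)
    # d[i][j] = edit distance between source[:i] and target[:j].
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i          # i deletions to reach empty target
    for j in range(m + 1):
        d[0][j] = j          # j insertions from empty source
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if source[i - 1] == target[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / copy
    return d[n][m]

print(min_edit_distance("intention", "execution"))  # 5
```

Filling the table row by row takes O(nm) time and space; the answer is in the bottom-right cell.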
3 N-gram Language Models
$$P(X_1, X_2, \ldots, X_n) = \prod_{k=1}^{n} P(X_k \mid X_1^{k-1})$$
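The chain-rule factorization above is made tractable by the n-gram approximation: condition each word only on the previous n-1 words, and estimate the conditional probabilities by maximum likelihood from counts. A minimal bigram sketch; the toy corpus is an assumption:

```python
# Bigram language model with maximum-likelihood estimates:
# P(w | prev) = count(prev, w) / count(prev).
from collections import Counter

corpus = [["<s>", "i", "am", "sam", "</s>"],
          ["<s>", "sam", "i", "am", "</s>"],
          ["<s>", "i", "like", "sam", "</s>"]]

# Count unigrams and adjacent word pairs across all sentences.
unigrams = Counter(w for sent in corpus for w in sent)
bigrams = Counter((sent[k - 1], sent[k])
                  for sent in corpus for k in range(1, len(sent)))

def bigram_prob(prev, word):
    """MLE estimate of P(word | prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

print(bigram_prob("<s>", "i"))  # 2/3: "i" starts 2 of 3 sentences
```

A sentence probability is then the product of its bigram probabilities, which is the n = 2 case of the product formula above.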