These notes are based mainly on the README of the deep-nlp project.
DeepNLP consists of the following modules:
NLP Pipeline Modules:
- Word Segmentation/Tokenization
- Part-of-speech (POS) tagging
- Named-entity recognition (NER)
- textsum: automatic summarization with Seq2Seq-Attention models
- textrank: extraction of the most important sentences
- textcnn: document classification
- Web API: free TensorFlow-powered web API
- Planned: parsing, automatic summarization
Algorithms (closely following the state of the art):
- Word Segmentation: linear-chain CRF (conditional random field), based on the Python CRF++ module
- POS: LSTM/BI-LSTM network, based on TensorFlow
- NER: LSTM/BI-LSTM/LSTM-CRF network, based on TensorFlow
- Textsum: Seq2Seq with attention mechanism
- Textcnn: CNN
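Character-based CRF segmenters of this kind typically tag each character with B/M/E/S (begin/middle/end/single) labels and pick the highest-scoring tag path with Viterbi decoding. A minimal sketch of that decoding step, with toy scores (not deepnlp's trained weights; all names here are illustrative):

```python
# Illustrative Viterbi decoding over BMES tags, as used by
# character-level linear-chain CRF segmenters (toy scores only).

def viterbi(chars, tags, emit, trans):
    """Return the highest-scoring tag sequence for `chars`.

    emit[(char, tag)] -> emission score (defaults to 0.0)
    trans[(t1, t2)]   -> transition score from tag t1 to t2 (defaults to 0.0)
    """
    # best[i][t] = (score of best path ending in tag t at position i, backpointer)
    best = [{t: (emit.get((chars[0], t), 0.0), None) for t in tags}]
    for i in range(1, len(chars)):
        row = {}
        for t in tags:
            e = emit.get((chars[i], t), 0.0)
            prev, score = max(
                ((p, best[i - 1][p][0] + trans.get((p, t), 0.0) + e) for p in tags),
                key=lambda x: x[1],
            )
            row[t] = (score, prev)
        best.append(row)
    # Backtrack from the best final tag.
    last = max(tags, key=lambda t: best[-1][t][0])
    path = [last]
    for i in range(len(chars) - 1, 0, -1):
        path.append(best[i][path[-1]][1])
    return list(reversed(path))
```

A trained CRF supplies the emission and transition scores; the decoding itself is independent of how they were learned.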
- Pre-trained Models
- Chinese: Segmentation, POS, NER (1998 People's Daily corpus)
- English: POS (Brown corpus)
- For your specific language, you can easily use the scripts to train a model on a corpus of your choice.
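Once a character tagger like the one above has produced BMES labels, turning them back into words is a simple join. A sketch of that conversion (an illustrative helper, not part of deepnlp's API):

```python
def tags_to_words(chars, tags):
    """Join characters into words according to BMES tags:
    B = word begin, M = word middle, E = word end, S = single-character word."""
    words, buf = [], ""
    for ch, tag in zip(chars, tags):
        if tag == "S":
            if buf:                 # flush any unfinished word defensively
                words.append(buf)
                buf = ""
            words.append(ch)
        elif tag == "B":
            if buf:
                words.append(buf)
            buf = ch
        elif tag == "M":
            buf += ch
        else:                       # "E"
            words.append(buf + ch)
            buf = ""
    if buf:
        words.append(buf)
    return words
```

For example, the tag sequence S S B E over 我 爱 北 京 yields the words 我 / 爱 / 北京.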
Installation
The models require TensorFlow 1.0, which can be installed with:
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.1-cp35-cp35m-linux_x86_64.whl
sudo pip install --upgrade $TF_BINARY_URL
Note: the models cannot be run under Python 3.
Then install deepnlp itself:
sudo pip install deepnlp
Usage
Downloading the pre-trained models
Installing deepnlp via pip does not include the model files, so they need to be downloaded separately. In Python 3, run:
import deepnlp
# Download all the modules
deepnlp.download()
# Download only specific module
deepnlp.download('segment')
deepnlp.download('pos')
deepnlp.download('ner')
deepnlp.download('textsum')
Word Segmentation
Run the following Python program:
#coding=utf-8
from __future__ import unicode_literals
from deepnlp import segmenter

text = "我刚刚在浙江卫视看了电视剧老九门,觉得陈伟霆很帅"
segList = segmenter.seg(text)   # returns a list of segmented words
text_seg = " ".join(segList)    # join the words with spaces
# encode('utf-8') is needed when printing to a Python 2 console
print (text.encode('utf-8'))
print (text_seg.encode('utf-8'))
This produces the following error:
Traceback (most recent call last):
File "test_segment.py", line 4, in <module>
from deepnlp import segm
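The traceback is cut off, but it points at the import on line 4 of the script, which suggests the package (or its model files) is not visible to the interpreter that runs the script. One generic way to check whether a package is importable in the current environment before attempting the import is the standard-library `importlib` machinery (this check is not deepnlp-specific):

```python
import importlib.util

def module_available(name):
    """Return True if the top-level module `name` can be imported
    in the current environment, without actually importing it."""
    return importlib.util.find_spec(name) is not None

# Example: verify the dependency before importing from it.
if module_available("deepnlp"):
    from deepnlp import segmenter   # safe: the package is present
else:
    print("deepnlp is not installed in this environment; "
          "run: sudo pip install deepnlp")
```

If the check fails, the pip that installed deepnlp and the python that runs the script likely belong to different environments.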