Tensorflow - Named Entity Recognition

Each folder contains a standalone, short (~100 lines of Tensorflow) main.py that implements a neural-network-based model for Named Entity Recognition (NER) using tf.estimator and tf.data.

Named Entity Recognition

These implementations are simple, efficient, and state-of-the-art, in the sense that they do at least as well as the results reported in the papers. The best model achieves an average F1 score of 91.21. To my knowledge, existing implementations available on the web are convoluted, outdated and not always accurate (including my previous work). This repo is an attempt to fix this, in the hope that it will enable people to test and validate new ideas quickly.

The script lstm_crf/main.py can also be seen as a simple introduction to the Tensorflow high-level APIs tf.estimator and tf.data applied to Natural Language Processing. A longer discussion of this implementation, along with an introduction to tf.estimator and tf.data, is available as a blog post.
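
For readers new to these APIs, here is a minimal, self-contained sketch (toy data and illustrative names, not the repo's actual code) of how a tf.data input_fn and a tf.estimator model_fn fit together:

import tensorflow as tf

def input_fn():
    # Toy tf.data pipeline: two "sentences" of word ids with per-token tag ids.
    words = [[1, 2, 3], [4, 5, 0]]
    tags = [[1, 0, 0], [0, 1, 0]]
    dataset = tf.data.Dataset.from_tensor_slices((words, tags))
    return dataset.batch(2).repeat(10)

def model_fn(features, labels, mode, params):
    # Embed word ids and project to per-token tag scores.
    embeddings = tf.get_variable('embeddings', [params['vocab_size'], 8])
    x = tf.nn.embedding_lookup(embeddings, features)
    logits = tf.layers.dense(x, params['num_tags'])
    loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits))
    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = tf.train.AdamOptimizer().minimize(
            loss, global_step=tf.train.get_or_create_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
    return tf.estimator.EstimatorSpec(mode, loss=loss)

estimator = tf.estimator.Estimator(model_fn, params={'vocab_size': 6, 'num_tags': 2})
estimator.train(input_fn)

The real main.py follows the same structure, with a richer input_fn (reading the data files described below) and a bi-LSTM + CRF model_fn.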

 

Install

You need python3 -- if you haven't switched yet, do it.

You need to install tf_metrics (multi-class precision, recall and f1 metrics for Tensorflow).

pip install git+https://github.com/guillaumegenthial/tf_metrics.git

OR

git clone https://github.com/guillaumegenthial/tf_metrics.git
cd tf_metrics
pip install .

 

Data Format

Follow the data/example.

  1. For name in {train, testa, testb}, create files {name}.words.txt and {name}.tags.txt that contain one sentence per line, each word / tag separated by a space (see the example after this list). I recommend using the IOBES tagging scheme.
  2. Create files vocab.words.txt, vocab.tags.txt and vocab.chars.txt that contain one token per line.
  3. Create a glove.npz file containing one array, embeddings, of shape (size_vocab_words, 300), built from the GloVe 840B vectors with np.savez_compressed.
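
For example, with the IOBES scheme, the n-th line of {name}.words.txt and the n-th line of {name}.tags.txt are aligned token by token (made-up content for illustration):

John lives in New York
S-PER O O B-LOC E-LOC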

Example scripts that build the vocab and glove.npz files from the {name}.words.txt and {name}.tags.txt files are provided in data/example. See

  1. build_vocab.py
  2. build_glove.py
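
As a rough sketch of what a script like build_glove.py does (the repo's actual script may differ; file names follow the layout above):

import numpy as np

# One word per line in the vocab; the line index is the word id.
with open('vocab.words.txt') as f:
    word_to_idx = {line.strip(): idx for idx, line in enumerate(f)}

# Rows default to zeros for words missing from GloVe.
embeddings = np.zeros((len(word_to_idx), 300))

with open('glove.840B.300d.txt') as f:
    for line in f:
        parts = line.strip().split(' ')
        word, vector = parts[0], parts[1:]
        if word in word_to_idx and len(vector) == 300:
            embeddings[word_to_idx[word]] = [float(x) for x in vector]

np.savez_compressed('glove.npz', embeddings=embeddings)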

If you just want to get started, once you have created your {name}.words.txt and {name}.tags.txt files, simply do

cd data/example
make download-glove
make build

(These commands will build the example dataset.)

Note that the example dataset is here for debugging purposes only and won't be of much use for training an actual model.

 

Get Started

Once you've produced all the required data files, simply pick one of the main.py scripts. Then, modify the DATADIR variable at the top of main.py.

To train, evaluate and write predictions to file, run

cd models/lstm_crf
python main.py

(These commands will train a bi-LSTM + CRF on the example dataset, provided you haven't changed DATADIR in main.py.)

Each model subdirectory contains a breakdown of the instructions.

 

Models

The models take inspiration from the papers cited under each model section below.

You can also read this blog post.

Word-vectors are not retrained to avoid any undesirable shift (explanation in these CS224N notes).
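
Concretely, freezing the vectors amounts to loading glove.npz into a non-trainable variable (a sketch; variable names are illustrative):

import numpy as np
import tensorflow as tf

word_ids = tf.constant([[1, 2, 3]])  # dummy batch of word ids

# Pre-computed GloVe rows of shape (size_vocab_words, 300).
glove = np.load('glove.npz')['embeddings']

# trainable=False prevents any gradient update to the word vectors.
variable = tf.Variable(glove, dtype=tf.float32, trainable=False)
word_embeddings = tf.nn.embedding_lookup(variable, word_ids)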

The models are tested on the CoNLL2003 shared task.

Training times are provided for indicative purposes only; they were obtained on a 2016 13-inch MacBook Pro (3.3 GHz Intel Core i7).

For each model, we run 5 experiments:

  • Train on train only
  • Early stopping on testa
  • Select the best of the 5 runs based on performance on testa (token-level F1)
  • Report the F1 score mean and standard deviation (entity-level F1 from the official conlleval script)
  • Select the best on testb for reference (this shouldn't be used for comparison, as it amounts to overfitting on the final test set)

In addition, we run 5 other experiments keeping an Exponential Moving Average (EMA) of the weights (used for evaluation), and report the best F1 as well as the mean / std.
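
A minimal sketch of the EMA idea with tf.train.ExponentialMovingAverage (wiring it into tf.estimator, e.g. with hooks, is omitted; the decay value is illustrative):

import tensorflow as tf

weights = tf.Variable([1.0, 2.0])
loss = tf.reduce_sum(tf.square(weights))
optimizer = tf.train.GradientDescentOptimizer(0.1)
optimize = optimizer.minimize(loss)

# Maintain shadow copies of the weights, decayed toward recent values.
ema = tf.train.ExponentialMovingAverage(decay=0.999)
with tf.control_dependencies([optimize]):
    train_op = ema.apply([weights])  # update shadow variables after each step

# At evaluation time, read the averaged weights instead of the raw ones.
averaged_weights = ema.average(weights)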

As you can see, there is no clear statistical evidence of which of the two character-based models is better. EMA seems to help most of the time. Also, considering the complexity of the models and the relatively small gap in performance (0.6 F1), the lstm_crf model is probably a safe bet for most concrete applications.


 

lstm_crf

Architecture

  1. GloVe 840B vectors
  2. Bi-LSTM
  3. CRF
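
A rough sketch of steps 2 and 3 (dummy inputs and illustrative sizes; the repo's code may use different cells and hyperparameters):

import tensorflow as tf

# Dummy batch: 2 sentences of 5 word vectors (300-d GloVe), with tag ids.
embeddings = tf.random_normal([2, 5, 300])
tags = tf.constant([[0, 1, 2, 0, 0], [1, 0, 0, 2, 0]])
sequence_lengths = tf.constant([5, 3])
num_tags = 3

# Bi-LSTM over the word embeddings.
cell_fw = tf.nn.rnn_cell.LSTMCell(100)
cell_bw = tf.nn.rnn_cell.LSTMCell(100)
(out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, embeddings,
    sequence_length=sequence_lengths, dtype=tf.float32)
output = tf.concat([out_fw, out_bw], axis=-1)

# Per-token scores, then the CRF negative log-likelihood as the loss.
logits = tf.layers.dense(output, num_tags)
log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood(
    logits, tags, sequence_lengths)
loss = tf.reduce_mean(-log_likelihood)

# Decoding: most likely tag sequence under the learned transitions.
pred_ids, _ = tf.contrib.crf.crf_decode(logits, transition_params, sequence_lengths)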

Related Paper: Bidirectional LSTM-CRF Models for Sequence Tagging by Huang, Xu and Yu

Training time ~ 20 min

|                  | train        | testa        | testb        | Paper, testb |
|------------------|--------------|--------------|--------------|--------------|
| best             | 98.45        | 93.81        | 90.61        | 90.10        |
| best (EMA)       | 98.82        | 94.06        | 90.43        |              |
| mean ± std       | 98.85 ± 0.22 | 93.68 ± 0.12 | 90.42 ± 0.10 |              |
| mean ± std (EMA) | 98.71 ± 0.47 | 93.81 ± 0.24 | 90.50 ± 0.21 |              |
| abs. best        |              |              | 90.61        |              |
| abs. best (EMA)  |              |              | 90.75        |              |

 

chars_lstm_lstm_crf

Architecture

  1. GloVe 840B vectors
  2. Chars embeddings
  3. Chars bi-LSTM
  4. Bi-LSTM
  5. CRF
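
A sketch of steps 2 and 3, producing one character-level vector per word that is then concatenated with its GloVe vector (dummy shapes, illustrative sizes):

import tensorflow as tf

# Dummy char ids: 2 sentences, 5 words, up to 8 chars per word.
char_ids = tf.zeros([2, 5, 8], dtype=tf.int32)
word_lengths = tf.fill([2, 5], 8)

char_embeddings = tf.get_variable('char_embeddings', [100, 25])
x = tf.nn.embedding_lookup(char_embeddings, char_ids)  # (2, 5, 8, 25)

# Flatten words into the batch dimension and run a char bi-LSTM per word.
x = tf.reshape(x, [-1, 8, 25])
lengths = tf.reshape(word_lengths, [-1])
cell_fw = tf.nn.rnn_cell.LSTMCell(25)
cell_bw = tf.nn.rnn_cell.LSTMCell(25)
_, ((_, h_fw), (_, h_bw)) = tf.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, x, sequence_length=lengths, dtype=tf.float32)

# Concatenate the two final hidden states and restore the sentence layout.
char_repr = tf.reshape(tf.concat([h_fw, h_bw], axis=-1), [2, 5, 50])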

Related Paper: Neural Architectures for Named Entity Recognition by Lample et al.

Training time ~ 35 min

|                  | train        | testa        | testb        | Paper, testb |
|------------------|--------------|--------------|--------------|--------------|
| best             | 98.81        | 94.36        | 91.02        | 90.94        |
| best (EMA)       | 98.73        | 94.50        | 91.14        |              |
| mean ± std       | 98.83 ± 0.27 | 94.02 ± 0.26 | 91.01 ± 0.16 |              |
| mean ± std (EMA) | 98.51 ± 0.25 | 94.20 ± 0.28 | 91.21 ± 0.05 |              |
| abs. best        |              |              | 91.22        |              |
| abs. best (EMA)  |              |              | 91.28        |              |

 

chars_conv_lstm_crf

Architecture

  1. GloVe 840B vectors
  2. Chars embeddings
  3. Chars 1d convolution and max-pooling
  4. Bi-LSTM
  5. CRF
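
A sketch of step 3: a 1-d convolution over the characters of each word followed by max-pooling over time (dummy shapes, illustrative filter sizes):

import tensorflow as tf

# Dummy char embeddings: 2 sentences, 5 words, 8 chars, 25-d vectors.
x = tf.random_normal([2, 5, 8, 25])

# Flatten words into the batch so the convolution runs over characters.
x = tf.reshape(x, [-1, 8, 25])
conv = tf.layers.conv1d(x, filters=30, kernel_size=3, padding='same')

# Max over the character dimension yields one vector per word.
char_repr = tf.reshape(tf.reduce_max(conv, axis=1), [2, 5, 30])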

Related Paper: End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF by Ma and Hovy

Training time ~ 35 min

|                  | train        | testa        | testb        | Paper, testb |
|------------------|--------------|--------------|--------------|--------------|
| best             | 99.16        | 94.53        | 91.18        | 91.21        |
| best (EMA)       | 99.44        | 94.50        | 91.17        |              |
| mean ± std       | 98.86 ± 0.30 | 94.10 ± 0.26 | 91.20 ± 0.15 |              |
| mean ± std (EMA) | 98.67 ± 0.39 | 94.29 ± 0.17 | 91.13 ± 0.11 |              |
| abs. best        |              |              | 91.42        |              |
| abs. best (EMA)  |              |              | 91.22        |              |

 

 

https://github.com/guillaumegenthial/tf_ner 
