NER hands-on demo: entity recognition on English text

I've been in the middle of final exams lately, so I haven't been posting. I wasn't planning to post today either, but to celebrate finishing my spoken English exam I'll squeeze out an update...
I'm doing my best to slack off...

I've written a lot before about the NER (named entity recognition) part of NLP, but it was all theory on made-up datasets, never run on real text. So today, good news: here is a worked example. The data is again pulled from the web, because I'm too lazy to annotate everything by hand...

OK, it has been established that I'm a lazybones.
Alright, let's get started.

1. Dataset

1.1 About the dataset

The annotated data looks roughly like this. The entities we want to extract are name, city and age; everything else is tagged O. The annotation uses the BIO scheme, one character per line with its tag:

I	O
 	O
d	O
o	O
n	O
'	O
t	O
 	O
w	O
a	O
n	O
t	O
 	O
t	O
o	O
 	O
t	O
e	O
l	O
l	O
 	O
y	O
o	O
u	O
 	O
m	O
y	O
 	O
n	O
a	O
m	O
e	O
,	O
 	O
b	O
u	O
t	O
 	O
I	O
 	O
f	O
r	O
o	O
m	O
 	O
K	B-CITY
A	I-CITY
Z	I-CITY
Q	I-CITY
G	I-CITY
L	I-CITY
N	I-CITY
J	I-CITY
.	O
END
I	O
 	O
b	O
o	O
r	O
n	O
 	O
i	O
n	O
 	O
O	B-CITY
F	I-CITY
C	I-CITY
J	I-CITY
B	I-CITY
S	I-CITY
H	I-CITY
.	O
 	O
I	O
'	O
m	O
 	O
5	B-AGE
7	I-AGE
,	O
 	O
j	O
u	O
s	O
t	O
 	O
c	O
a	O
l	O
l	O
 	O
m	O
e	O
 	O
A	B-NAME
r	I-NAME
w	I-NAME
u	I-NAME
.	O
END

After tagging, the text is randomly split into a training set, a validation set and a test set,
i.e. ['train', 'valid', 'test'].

1.2 Data processing

We first define a few helper functions to process the data.

1.2.1 Define the loading function

## read a CoNLL-style file into a list of (tokens, tags) samples
def read_file(file_path):
    samples = []
    tokens = []
    tags = []
    with open(file_path, 'r', encoding='utf-8') as fileobj:
        for content in fileobj:
            content = content.strip('\n')
            if content == '-DOCSTART- -X- -X- O':
                # skip the document-start marker lines
                pass
            elif content == '':
                # a blank line ends the current sentence
                if len(tokens) != 0:
                    samples.append((tokens, tags))
                    tokens = []
                    tags = []
            else:
                contents = content.split(' ')
                tokens.append(contents[0])
                tags.append(contents[-1])
    # keep the last sentence if the file does not end with a blank line
    if len(tokens) != 0:
        samples.append((tokens, tags))
    return samples
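
As a quick sanity check (assuming the CoNLL-2003 files sit in ./conll2003_v2/, the same path used in section 3), you can peek at the first parsed sample:

# Quick look at what read_file returns: a list of (tokens, tags) tuples
samples = read_file("./conll2003_v2/train.txt")
tokens, tags = samples[0]
print(len(samples))                    # number of sentences in the file
print(list(zip(tokens, tags))[:5])     # first few (token, tag) pairs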

1.2.2 Define the get_dicts function

Count the words and labels in the corpus and build word-to-index and tag-to-index dictionaries.

## count word/tag frequencies and build word->index and tag->index dicts
def get_dicts(datas):
    w_all_dict, n_all_dict = {}, {}
    for sample in datas:
        for token, tag in zip(*sample):
            if token not in w_all_dict.keys():
                w_all_dict[token] = 1
            else:
                w_all_dict[token] += 1

            if tag not in n_all_dict.keys():
                n_all_dict[tag] = 1
            else:
                n_all_dict[tag] += 1

    # sort by frequency; keep the 15999 most frequent words and put "UNK" at index 0
    sort_w_list = sorted(w_all_dict.items(), key=lambda d: d[1], reverse=True)
    sort_n_list = sorted(n_all_dict.items(), key=lambda d: d[1], reverse=True)
    w_keys = [x for x, _ in sort_w_list[:15999]]
    w_keys.insert(0, "UNK")

    n_keys = [x for x, _ in sort_n_list]
    w_dict = {x: i for i, x in enumerate(w_keys)}
    n_dict = {x: i for i, x in enumerate(n_keys)}
    return (w_dict, n_dict)
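
A quick look at what comes back, using the samples from the snippet above (the exact sizes depend on your data; the tag set shown is just what CoNLL-2003 typically contains):

# Inspect the dictionaries built from the training samples
w_dict, n_dict = get_dicts(samples)
print(len(w_dict))   # at most 16000: the 15999 most frequent words plus "UNK"
print(n_dict)        # e.g. {'O': 0, 'B-LOC': 1, ...}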

1.2.3 Define the w2num function

Convert each sample's tokens and tags into indices using w_dict and n_dict, and record each sentence's length.

def w2num(datas, w_dict, n_dict):
    ret_datas = []
    for sample in datas:
        num_w_list, num_n_list = [], []
        for token, tag in zip(*sample):
            # unknown words map to "UNK"; unknown tags fall back to "O"
            if token not in w_dict.keys():
                token = "UNK"
            if tag not in n_dict:
                tag = "O"

            num_w_list.append(w_dict[token])
            num_n_list.append(n_dict[tag])

        # each item: (word indices, tag indices, sentence length)
        ret_datas.append((num_w_list, num_n_list, len(num_n_list)))
    return (ret_datas)

1.2.4 Define the len_norm function

Pad (or truncate) the text and label sequences into equal-length arrays of length 80.

def len_norm(data_num, lens=80):
    ret_datas = []
    for sample1 in list(data_num):
        sample = list(sample1)
        ls = sample[-1]   # original sentence length
        # pad with 0: index 0 is "UNK" for words and the most frequent tag (normally "O") for labels
        while (ls < lens):
            sample[0].append(0)
            ls = len(sample[0])
            sample[1].append(0)
        else:
            # truncate anything longer than `lens`
            sample[0] = sample[0][:lens]
            sample[1] = sample[1][:lens]

        ret_datas.append(sample[:2])
    return (ret_datas)
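
A minimal check that the padding worked, chaining the helpers above on the training samples:

# Every padded sample should have exactly 80 word indices and 80 tag indices
norm = len_norm(w2num(samples, w_dict, n_dict))
print(set(len(words) for words, _ in norm))   # expected: {80}
print(set(len(tags) for _, tags in norm))     # expected: {80}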

That's all the data-processing functions defined. Next we use them to load and process the data.
First, we need to load the packages.

2. Load the required packages

from tqdm import tqdm
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.models import *
from keras.optimizers  import *
from keras.utils import np_utils
import numpy as np

These are mostly Keras modules; you can look up how the individual functions work on your own.

3. Data preparation

Now let's actually load the data.

data_path = "./conll2003_v2/"
data_parts = ['train', 'valid', 'test']
extension = '.txt'
dataset = {}
for data_part in tqdm(data_parts):
    file_path = data_path + data_part + extension
    dataset[data_part] = read_file(str(file_path))
train = dataset['train']
test = dataset['test']
valid = dataset['valid']

## build the word and tag dictionaries from the training set
w_dict, n_dict = get_dicts(dataset['train'])

## convert the training text and labels to indices
data_num = {}
data_num["train"] = w2num(train, w_dict, n_dict)

## pad text and labels to equal-length arrays of length 80
data_norm = {}
data_norm["train"] = len_norm(data_num["train"])

The file path is up to you; the conll2003_v2 folder contains three files: 'train.txt', 'valid.txt' and 'test.txt'.

4. Define the model

from keras.layers import Embedding, Bidirectional, LSTM, GRU, TimeDistributed, Dense, BatchNormalization
from keras_contrib.layers import CRF
num_classes = len(n_dict.keys())

model = Sequential()
# vocabulary of 16000 (15999 words + UNK), 128-dim embeddings, sequences of length 80
model.add(Embedding(16000, 128, input_length=80))
# two stacked bidirectional GRU layers, outputs concatenated to 128 dims per step
model.add(Bidirectional(GRU(64, return_sequences=True), merge_mode="concat"))
model.add(Bidirectional(GRU(64, return_sequences=True), merge_mode="concat"))
model.add(Dense(64, activation='relu'))
model.add(BatchNormalization(axis=1))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# CRF output layer; sparse_target=True lets us feed integer tag labels
crf = CRF(num_classes, sparse_target=True)
model.add(crf)

print(model.summary())

opt = Adam(5e-4)
# model.compile('adam', loss=crf.loss_function, metrics=[crf.accuracy])
model.compile(loss=crf.loss_function, optimizer=opt, metrics=[crf.accuracy])

# sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# model.compile(loss=crf.loss_function, optimizer=sgd, metrics=[crf.accuracy])

The model starts with an Embedding layer, followed by two bidirectional GRU layers (Bidirectional) to capture the context between words, BatchNormalization to improve generalization, and a CRF (conditional random field) layer on top; the loss is computed with the CRF's own loss function.
The model structure looks like this:

Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_4 (Embedding)      (None, 80, 128)           2048000   
_________________________________________________________________
bidirectional_7 (Bidirection (None, 80, 128)           74112     
_________________________________________________________________
bidirectional_8 (Bidirection (None, 80, 128)           74112     
_________________________________________________________________
dense_7 (Dense)              (None, 80, 64)            8256      
_________________________________________________________________
batch_normalization_4 (Batch (None, 80, 64)            320       
_________________________________________________________________
dropout_4 (Dropout)          (None, 80, 64)            0         
_________________________________________________________________
dense_8 (Dense)              (None, 80, 9)             585       
_________________________________________________________________
crf_4 (CRF)                  (None, 80, 9)             189       
=================================================================
Total params: 2,205,574
Trainable params: 2,205,414
Non-trainable params: 160
_________________________________________________________________
None

5. Train the model

Before training, there is one serious issue to watch out for: the classes are very imbalanced. Obviously O is by far the most common tag, and with such skewed data the model may simply tag everything as O, which is not what we want. So we adjust the weight of each class so that the classes are roughly balanced.
We can use class_weight from sklearn.utils to compute a weight for each class.

## compute balanced class weights from the training labels
import numpy as np
import pandas as pd
train_data = np.array(data_norm["train"])
train_y = train_data[:, 1, :]
np.unique(train_y)                        # the tag indices that actually occur
train_ser = pd.Series(train_y.flatten())  # flatten all tag labels into one series

from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced', np.unique(train_y), train_ser)
class_weights

The computed per-class weights are:

array([0.15385711, 4.05223665, 1.60011396, 4.5617284 , 2.14595751,
       1.76683025, 6.72461686, 7.91934574, 9.75069444])
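
One caveat: Keras documents class_weight as a dict mapping class index to weight, so if your Keras version complains about the bare array in model.fit below, a minimal conversion is:

# Turn the sklearn array into a {class_index: weight} dict for Keras
class_weights = dict(enumerate(class_weights))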

Now we train the model for real.

train_x = train_data[:, 0, :]  # first column is x (word indices)
train_y = train_data[:, 1, :]  # second column is y (tag indices)
train_y = train_y.reshape((train_y.shape[0], train_y.shape[1], 1))  # reshape to 3 dims for the CRF's sparse targets
print(train_y.shape)


## validation set -- reuse the dictionaries built from the training set
data_num["valid"] = w2num(valid, w_dict, n_dict)
data_norm["valid"] = len_norm(data_num["valid"])
valid_data = np.array(data_norm["valid"])
valid_x = valid_data[:, 0, :]  # first column is x
valid_y = valid_data[:, 1, :]  # second column is y
valid_y = valid_y.reshape((valid_y.shape[0], valid_y.shape[1], 1))
print(valid_y.shape)

model.fit(x=train_x, y=train_y, epochs=10, batch_size=32,
          class_weight=class_weights,
          verbose=1,
          validation_data=(valid_x, valid_y),
          shuffle=True)

The training output is as follows:

(14041, 80, 1)
(3250, 80, 1)
Train on 14041 samples, validate on 3250 samples
Epoch 1/10
14041/14041 [==============================] - 120s 9ms/step - loss: 1.2785 - crf_viterbi_accuracy: 0.7673 - val_loss: 0.8265 - val_crf_viterbi_accuracy: 0.9671
Epoch 2/10
14041/14041 [==============================] - 115s 8ms/step - loss: 0.5351 - crf_viterbi_accuracy: 0.9697 - val_loss: 0.3710 - val_crf_viterbi_accuracy: 0.9671
Epoch 3/10
14041/14041 [==============================] - 120s 9ms/step - loss: 0.2431 - crf_viterbi_accuracy: 0.9697 - val_loss: 0.2453 - val_crf_viterbi_accuracy: 0.9671
Epoch 4/10
14041/14041 [==============================] - 116s 8ms/step - loss: 0.1549 - crf_viterbi_accuracy: 0.9698 - val_loss: 0.2093 - val_crf_viterbi_accuracy: 0.9669
Epoch 5/10
14041/14041 [==============================] - 127s 9ms/step - loss: 0.1188 - crf_viterbi_accuracy: 0.9713 - val_loss: 0.1980 - val_crf_viterbi_accuracy: 0.9661
Epoch 6/10
14041/14041 [==============================] - 115s 8ms/step - loss: 0.0991 - crf_viterbi_accuracy: 0.9731 - val_loss: 0.1938 - val_crf_viterbi_accuracy: 0.9658
Epoch 7/10
14041/14041 [==============================] - 116s 8ms/step - loss: 0.0860 - crf_viterbi_accuracy: 0.9761 - val_loss: 0.1925 - val_crf_viterbi_accuracy: 0.9653
Epoch 8/10
14041/14041 [==============================] - 113s 8ms/step - loss: 0.0757 - crf_viterbi_accuracy: 0.9800 - val_loss: 0.1943 - val_crf_viterbi_accuracy: 0.9648
Epoch 9/10
14041/14041 [==============================] - 145s 10ms/step - loss: 0.0671 - crf_viterbi_accuracy: 0.9810 - val_loss: 0.1981 - val_crf_viterbi_accuracy: 0.9640
Epoch 10/10
14041/14041 [==============================] - 122s 9ms/step - loss: 0.0603 - crf_viterbi_accuracy: 0.9830 - val_loss: 0.2023 - val_crf_viterbi_accuracy: 0.9616
<keras.callbacks.callbacks.History at 0x21888f37f98>

The model has quite a few parameters and training takes a while, so try it for yourself.
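
After training, it's worth saving the weights so you don't have to retrain every time; the file name here just matches the commented-out load call in the prediction code below:

# Save the trained weights; restore them later with model.load_weights("model.h5")
model.save_weights("model.h5")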

6. Model prediction

# load previously saved weights (optional)
# model.load_weights("model.h5")
x = 568
pre_y = model.predict(train_x[x:x+1])
# print(pre_y.shape)

# take the most probable tag index at each position
pre_y = np.argmax(pre_y, axis=-1)
print(pre_y)
print(train_y[x])
# for i in range(0, len(train_y[3:4])):
#     print("label " + str(i), train_y[i])
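
The prediction is just a row of tag indices; to read it as BIO tags you can invert n_dict (a small sketch using the dictionary built from the training set):

# Map the predicted indices back to their tag names
id2tag = {i: tag for tag, i in n_dict.items()}
pred_tags = [id2tag[i] for i in pre_y[0]]
print(pred_tags)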

OK, that's it, finally finished. Phew!
