These tricks were collected while reading papers; I found them very useful, so I am recording them here.
(1) Alternating between optimizers
Define two optimizers, one with L2 regularization (weight decay) and one without:
import torch.optim as optim

# optim1 applies L2 regularization via weight_decay; optim2 does not
optim1 = optim.SGD(model.parameters(), lr=rela_config.lr, momentum=rela_config.momentum, weight_decay=rela_config.weight_decay)
optim2 = optim.SGD(model.parameters(), lr=rela_config.lr, momentum=rela_config.momentum)
During training, the two optimizers are used alternately: for example, use optim1 on odd-numbered epochs and optim2 on even-numbered epochs, as in the sketch below.
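A minimal sketch of how such alternation could look in a standard PyTorch training loop; num_epochs, train_loader, and criterion are placeholder names not in the original note.

for epoch in range(num_epochs):
    # odd epochs: L2-regularized optimizer; even epochs: plain optimizer
    optimizer = optim1 if epoch % 2 == 1 else optim2
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()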
(2) Choice of word embedding
Two kinds of word embedding are used: one trained on a large-scale corpus, the other randomly initialized. The two embeddings are concatenated and used as the word embedding of the current word (a code sketch follows the citation below).
As shown in the paper's figure, e′_w is the pretrained word embedding and e_w is the randomly initialized word embedding.
This trick comes from:
End-to-End Neural Relation Extraction with Global Optimization
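A possible PyTorch rendering of this trick is sketched below; it is my own illustration rather than code from the paper, and pretrained_matrix, rand_dim, and the freezing of the pretrained table are all assumptions.

import torch
import torch.nn as nn

class ConcatEmbedding(nn.Module):
    # Sketch: concatenate the pretrained embedding e′_w with a randomly
    # initialized, trainable embedding e_w (class and argument names are
    # illustrative assumptions).
    def __init__(self, pretrained_matrix, rand_dim):
        super().__init__()
        # e′_w: trained on a large corpus; frozen here as one common choice
        self.pretrained = nn.Embedding.from_pretrained(pretrained_matrix, freeze=True)
        # e_w: randomly initialized and updated during training
        self.random = nn.Embedding(pretrained_matrix.size(0), rand_dim)

    def forward(self, token_ids):
        # the word's final embedding is the concatenation [e′_w ; e_w]
        return torch.cat([self.pretrained(token_ids), self.random(token_ids)], dim=-1)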
(3) Usage of word embedding
This method comes from:
Joint Extraction of Entities and Relations Based on a Novel Graph Scheme
(4) Word dropout
Given a neural network with n units, dropout prevents overfitting by creating an ensemble of 2^n different networks that share parameters, where each network consists of some combination of dropped and undropped units. Instead of dropping units, a natural extension for the DAN model is to randomly drop word tokens' entire word embeddings from the vector average. Using this method, which we call word dropout
This method comes from:
Deep unordered composition rivals syntactic methods for text classification
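Below is a minimal sketch of word dropout applied before vector averaging, under my own assumptions (embeddings of shape [batch, seq_len, dim], drop probability p); it illustrates the quoted idea and is not the paper's code.

import torch

def word_dropout_average(embeddings, p=0.3, training=True):
    # embeddings: [batch, seq_len, dim]; drop each token's entire
    # embedding with probability p, then average the surviving ones
    if not training or p == 0:
        return embeddings.mean(dim=1)
    keep = (torch.rand(embeddings.shape[:2], device=embeddings.device) >= p).float()
    kept = embeddings * keep.unsqueeze(-1)
    # guard against a sentence whose tokens were all dropped
    denom = keep.sum(dim=1, keepdim=True).clamp(min=1.0)
    return kept.sum(dim=1) / denom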
To be continued.