# loss has shape [batch, num_steps]; seqlen has shape [batch]
def mask(self, loss, seqlen):
    # 1.0 for valid timesteps, 0.0 for padding; float so it can multiply the loss
    mask = tf.sequence_mask(seqlen, maxlen=config.num_steps, dtype=tf.float32)
    # Sum the loss over valid timesteps only, then normalize by their count
    clear_loss = tf.reduce_sum(loss * mask) / tf.reduce_sum(mask)
    return clear_loss
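To see how the masking behaves in isolation, here is a minimal self-contained sketch; the batch size, loss values, and sequence lengths below are made up for illustration, and num_steps stands in for the original config.num_steps:

import tensorflow as tf

num_steps = 30                              # stands in for config.num_steps
loss = tf.random.uniform([4, num_steps])    # dummy per-timestep loss, [batch, num_steps]
seqlen = tf.constant([30, 12, 7, 25])       # true length of each sequence in the batch

# 1.0 where t < seqlen[i], 0.0 at padded positions
mask = tf.sequence_mask(seqlen, maxlen=num_steps, dtype=tf.float32)
# Mean loss over real timesteps only; padded positions contribute nothing
clear_loss = tf.reduce_sum(loss * mask) / tf.reduce_sum(mask)

Dividing by tf.reduce_sum(mask) rather than batch * num_steps is the key point: otherwise short sequences would drag the average down with zeros from their padded positions.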
This implementation was inspired by the following Q&A:
Question:
Hi,
Say I want to train some LSTM unit, and my training data has variable lengths with a maximum length of, say, 30.
What is the right thing to do?
In TF we cannot dynamically create a computation graph with varying lengths, so the number of LSTM unrolling steps is fixed.
So do we have to pad everything to have a length of 30?
Let’s say
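A minimal sketch of the padding the question asks about; the toy sequences below are invented, the padding value 0 is an assumption, and 30 matches the maximum length mentioned in the question:

import numpy as np

sequences = [[3, 1, 4], [1, 5, 9, 2, 6], [5]]   # hypothetical variable-length inputs
max_len = 30

# Pad every sequence with zeros up to max_len, recording the true lengths
padded = np.zeros([len(sequences), max_len], dtype=np.int32)
seqlen = np.array([len(s) for s in sequences], dtype=np.int32)
for i, s in enumerate(sequences):
    padded[i, :len(s)] = s
# `padded` feeds the fixed-length LSTM graph; `seqlen` feeds the mask function above.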