The embedding table is trainable. It can also be fixed, or initialized from pre-defined word vectors such as GloVe (https://nlp.stanford.edu/projects/glove/). (Note: the `weights` argument is deprecated; use `embeddings_initializer` instead.)
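A minimal NumPy sketch of what an embedding table is: lookup by token id is just row indexing into a matrix. The vocabulary and vector values below are made-up toy data, not real GloVe rows; in `tf.keras` you would pass such a pretrained table via `embeddings_initializer=tf.keras.initializers.Constant(pretrained_table)` and set `trainable=False` to keep it fixed.

```python
import numpy as np

# Toy vocabulary and embedding dimension (hypothetical, for illustration only).
vocab = {"the": 0, "cat": 1, "sat": 2}
embedding_dim = 4

# Option 1: a trainable table, randomly initialized
# (this is what the Embedding layer does by default).
rng = np.random.default_rng(seed=0)
random_table = rng.normal(size=(len(vocab), embedding_dim))

# Option 2: a fixed, pre-defined table, e.g. rows loaded from GloVe files.
# These numbers are made up; real GloVe vectors would be loaded from disk.
pretrained_table = np.array([
    [0.1, 0.2, 0.3, 0.4],   # "the"
    [0.5, 0.6, 0.7, 0.8],   # "cat"
    [0.9, 1.0, 1.1, 1.2],   # "sat"
])

# An embedding lookup is just row indexing into the table.
token_ids = np.array([vocab["the"], vocab["cat"]])
vectors = pretrained_table[token_ids]  # shape: (2, embedding_dim)
```

Training then adjusts the rows of the table by backpropagation, exactly like the weights of any other layer.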
The following is quoted from the official documentation (https://www.tensorflow.org/tutorials/text/word_embeddings):
"the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on)"