Autoencoding neural models, such as BERT and VAE, can be used to extract word representations.
For BERT, a portion of the input words X is randomly masked out, and the pre-training task is to predict those masked words from the surrounding context words.
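The masking step above can be sketched as follows. This is a minimal illustration of random token masking for masked-language-model pre-training, not BERT's actual implementation (real BERT masks ~15% of tokens and sometimes substitutes random words or keeps the original; the function name and token strings here are hypothetical):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace a fraction of tokens with a mask symbol.

    Returns the masked sequence and a dict mapping each masked
    position to its original token -- the prediction targets the
    model must recover from context.
    """
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets[i] = tok  # model must predict this from context
        else:
            masked.append(tok)
    return masked, targets

masked, targets = mask_tokens(
    ["the", "cat", "sat", "on", "the", "mat"], mask_prob=0.5, seed=0
)
```

A model is then trained to fill each `[MASK]` position with its original token, using only the unmasked context.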
For VAE, the encoder maps the input into a lower-dimensional latent space, a latent code is sampled as z ~ N(μ(h), σ(h)), and the decoder maps z back to a reconstruction x̂.
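The encode–sample–decode pipeline can be sketched with the standard reparameterization trick, z = μ + σ·ε with ε ~ N(0, I). The linear encoder/decoder and the dimensions below are toy assumptions for illustration, not a full VAE (no training loop or KL term is shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): 8-dim input x, 2-dim latent z.
d_in, d_z = 8, 2
W_mu = rng.normal(size=(d_in, d_z)) * 0.1      # encoder head for mu(h)
W_logvar = rng.normal(size=(d_in, d_z)) * 0.1  # encoder head for log sigma^2(h)
W_dec = rng.normal(size=(d_z, d_in)) * 0.1     # decoder: z -> x_hat

def encode(x):
    """Map input to the parameters of the latent Gaussian."""
    return x @ W_mu, x @ W_logvar

def sample_z(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Map the latent code back to input space."""
    return z @ W_dec

x = rng.normal(size=(1, d_in))
mu, log_var = encode(x)
z = sample_z(mu, log_var)
x_hat = decode(z)
```

Sampling through μ and σ rather than drawing z directly keeps the sampling step differentiable, which is what lets a VAE be trained end-to-end by gradient descent.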