1) Summary of related articles
http://licstar.net/archives/328: this article gives a brief overview and commentary on work ranging from the 2003 paper A Neural Probabilistic Language Model, through A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning, Three New Graphical Models for Statistical Language Modelling, and Recurrent Neural Network Based Language Model (the RNN language model), up to the 2012 paper Improving Word Representations via Global Context and Multiple Word Prototypes. Unfortunately, it does not cover word2vec.
http://blog.csdn.net/xiaqian0917/article/details/51946582: a Chinese translation of A Neural Probabilistic Language Model.
http://blog.csdn.net/itplus/article/details/37969519: a fairly detailed introduction to Google's word2vec, covering both the CBOW and skip-gram implementations, which are quite fast to train; a minimal usage sketch follows this list.
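As a quick illustration of the CBOW/skip-gram switch mentioned above (a generic gensim sketch of my own, not code from the linked post; assuming gensim 4.x, where the dimensionality argument is vector_size):

```python
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (real training needs far more data).
sentences = [
    ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"],
    ["the", "dog", "barks", "at", "the", "fox"],
]

# sg=0 selects CBOW (predict the center word from its context);
# sg=1 selects skip-gram (predict context words from the center word).
cbow = Word2Vec(sentences, vector_size=128, window=5, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=128, window=5, min_count=1, sg=1)

# Each model exposes one vector per vocabulary word plus similarity queries.
print(skipgram.wv["fox"].shape)          # (128,)
print(skipgram.wv.most_similar("fox"))   # nearest neighbors by cosine similarity
```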
2) Personal understanding and open questions
Word-vector models come with many hyperparameters; the one I care about most is the dimensionality of the word vectors. How many dimensions is most reasonable, and what each individual dimension means, still has no theoretical guidance, and I have not found any papers on it either. However, I did come across "Word Representations via Gaussian Embedding" [1]. The main novelty of this work is representing each word not as a single point but as a Gaussian, so every word carries a mean and a variance: the mean gives the word's position in semantic space, and the variance captures its semantic scope (the larger the variance, the broader the word's semantic coverage). This should prove useful for hypernym/hyponym and membership relations in knowledge graphs (unfortunately I have not reproduced this work myself; given my interest in knowledge graphs, I may try it later). There are of course other related papers as well; the ones I have read are listed below, followed by a small numerical sketch of the Gaussian-embedding idea:
[1] Luke Vilnis, Andrew McCallum. "Word Representations via Gaussian Embedding". ICLR 2015.
[2] Shizhu He, Kang Liu, Guoliang Ji and Jun Zhao. "Learning to Represent Knowledge Graphs with Gaussian Embedding". CIKM 2015.
[3] Xinchi Chen, Xipeng Qiu, Jingxiang Jiang, Xuanjing Huang. "Gaussian Mixture Embeddings for Multiple Word Prototypes". ICLR 2016 submission.
[4] Ivan Vendrov, Ryan Kiros, Sanja Fidler, Raquel Urtasun. "Order-Embeddings of Images and Language". ICLR 2016.
[5] Eric Nalisnick, Sachin Ravi. "Learning the Dimensionality of Word Embeddings". arXiv preprint 2017.
[6] Michael Hahn, Frank Keller. "Modeling Human Reading with Neural Attention". EMNLP 2016.
[7] Yiqun Liu, Zeyang Liu, Ke Zhou, Meng Wang, Huanbo Luan, Chao Wang, Min Zhang, Shaoping Ma. "Predicting Search User Examination with Visual Saliency". SIGIR 2016.
[8] John M. Henderson, Svetlana V. Shinkareva, Jing Wang, Steven G. Luke, Jenn Olejarczyk. "Predicting Cognitive State from Eye Movements". PLoS ONE 8(5): e64937, 2013.
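To make the mean/variance intuition from [1] concrete, here is a small numerical sketch (my own illustration, not code from any of the papers above): with diagonal-Gaussian word representations, the asymmetric KL divergence can score entailment-style relations, since a broad Gaussian (a hypernym such as "animal") can cover a narrow one (such as "dog"):

```python
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL(N(mu0, diag(var0)) || N(mu1, diag(var1))) for diagonal Gaussians."""
    k = mu0.shape[0]
    return 0.5 * (np.sum(var0 / var1)
                  + np.sum((mu1 - mu0) ** 2 / var1)
                  - k
                  + np.sum(np.log(var1) - np.log(var0)))

rng = np.random.default_rng(0)
d = 50
# Hypothetical learned embeddings: "animal" is broad (large variance),
# "dog" is a nearby but narrower concept (small variance).
mu_animal, var_animal = rng.normal(size=d), np.full(d, 2.0)
mu_dog,    var_dog    = mu_animal + 0.1 * rng.normal(size=d), np.full(d, 0.5)

# The asymmetry can reflect the hypernym direction:
# KL(dog || animal) comes out much smaller than KL(animal || dog),
# since the broad "animal" Gaussian covers the narrow "dog" Gaussian.
print(kl_diag_gaussians(mu_dog, var_dog, mu_animal, var_animal))
print(kl_diag_gaussians(mu_animal, var_animal, mu_dog, var_dog))
```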
Another point of interest is doc2vec: after all, simply averaging word vectors ignores the order of the words. On this topic, see http://www.cnblogs.com/maybe2030/p/5427148.html; a minimal doc2vec sketch follows below.
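Below is a minimal Doc2Vec sketch (again a generic gensim 4.x example of my own with toy data, not taken from the linked post); unlike averaging word vectors, it learns a dedicated vector per document:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Each training document is tokenized and given a unique tag.
docs = [
    TaggedDocument(words=["machine", "learning", "with", "word", "vectors"], tags=["d0"]),
    TaggedDocument(words=["news", "articles", "about", "the", "economy"], tags=["d1"]),
]

model = Doc2Vec(docs, vector_size=128, window=5, min_count=1, epochs=40)

# Vectors for training documents are looked up by tag;
# unseen documents are embedded with infer_vector.
print(model.dv["d0"].shape)                               # (128,)
print(model.infer_vector(["economic", "news", "report"])) # vector for a new document
```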
3) Related applications
Using half a year of news data, I used TensorFlow to build a word2vec model and trained 128-dimensional word vectors; the results (figure omitted) were quite good.
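For reference, here is a minimal sketch of the core of such a setup: a skip-gram model trained with NCE loss. This is my own illustration assuming TensorFlow 2.x, not the actual training code used here; vocab_size, num_sampled, and the learning rate are placeholder values.

```python
import tensorflow as tf

vocab_size, embed_dim, num_sampled = 50000, 128, 64  # placeholder sizes

# Input embedding matrix plus the NCE output weights/biases.
embeddings = tf.Variable(tf.random.uniform([vocab_size, embed_dim], -1.0, 1.0))
nce_weights = tf.Variable(tf.random.truncated_normal([vocab_size, embed_dim],
                                                     stddev=embed_dim ** -0.5))
nce_biases = tf.Variable(tf.zeros([vocab_size]))
optimizer = tf.keras.optimizers.SGD(learning_rate=1.0)

@tf.function
def train_step(center_ids, context_ids):
    # center_ids: [batch] int ids; context_ids: [batch, 1] int64 target word ids.
    with tf.GradientTape() as tape:
        embedded = tf.nn.embedding_lookup(embeddings, center_ids)
        # NCE loss: distinguish the true context word from num_sampled noise words.
        loss = tf.reduce_mean(tf.nn.nce_loss(
            weights=nce_weights, biases=nce_biases,
            labels=context_ids, inputs=embedded,
            num_sampled=num_sampled, num_classes=vocab_size))
    variables = [embeddings, nce_weights, nce_biases]
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss

# Example step on a dummy batch of (center, context) pairs:
# train_step(tf.constant([1, 2]), tf.constant([[3], [4]], dtype=tf.int64))
```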