Understanding Deep Learning, Neural Networks, and the High-Dimensional Spatial Structure of the Input Samples Themselves through Visualizing Representations

catalogue

1. Introduction
2. Neural Networks Transform Space - the spatial structure inside a neural network
3. Understand the data itself by visualizing the high-dimensional input dataset - the spatial structure hidden in the input samples
4. Example 1: Word Embeddings in NLP - the spatial structure hidden in text word sequences
5. Example 2: Paragraph Vectors in NLP - the spatial structure hidden inside vectorized paragraphs
6. Example 3: Visualizing images as compact (4096-dim) feature vectors derived from a trained convolutional neural network with t-SNE - high-dimensional visualization of CNN-abstracted images
7. Example 4: Clustering analysis of AV actresses found on the web
8. Embedding-based vector representations of long, multi-word texts

 

1. Introduction

0x1: Looking at NLP problems from the word-frequency perspective

This article discusses problems in the NLP domain. The bag-of-words model is one candidate approach here; see the link below for background. I hope to devote a separate article to it, since it is an interesting topic in its own right

https://en.wikipedia.org/wiki/Bag-of-words_model

Here is a brief description of the bag-of-words model as I understand it

1. Treat the words in an input sample as an unordered list and count how often each word appears; word frequency is the foundation of this model
For example, suppose the input data consists of two sentences:
(1) John likes to watch movies. Mary likes movies too.
(2) John also likes to watch football games.
Splitting them into an unordered word list gives:
[
    "John",
    "likes",
    "to",
    "watch",
    "movies",
    "Mary",
    "too",
    "also",
    "football",
    "games"
]
Count the frequency of each word to build a frequency map, then encode the original input data against this map:
(1) [1, 2, 1, 1, 2, 1, 1, 0, 0, 0]
(2) [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
The resulting vector is a representation of the original data in the word-frequency dimension (and because the frequencies take the whole sample set into account, it can to some extent be regarded as stretching the data into a higher-dimensional space)

2. The traditional bag-of-words model runs into the "stop-word problem": words such as "the", "he", commas and periods appear with high frequency in every sample set (malicious or benign). The tf-idf model addresses this by scoring each word's importance from its frequency in the current document relative to its frequency in the whole corpus, and weighting the counts by that importance

3. The N-gram model
The traditional bag-of-words model can be seen as a 1-gram model. Its drawback is that it cannot capture the sequential relationships between words: all frequency statistics are over single words, so much of the sequence information is lost. An n-gram model is a Markov-style language model. Using the same two sentences as an example:
(1) John likes to watch movies. Mary likes movies too.
(2) John also likes to watch football games.
A 2-gram split yields:
[
    "John likes",
    "likes to",
    "to watch",
    "watch movies",
    "Mary likes",
    "likes movies",
    "movies too",
    "John also",
    "also likes",
    "watch football",
    "football games"
]
By counting the phrases produced by the 2-gram split, we retain some of the sequence information contained in the original input. A minimal scikit-learn sketch of the three encodings above (bag-of-words, tf-idf, 2-gram) follows this list.
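
The sketch below is my own illustration (not from the original post) and assumes scikit-learn is installed; it encodes the two example sentences with CountVectorizer and TfidfVectorizer. Note that the column order is determined by the vectorizers' internal vocabularies, so it will not match the hand-written lists above exactly.

# -*- coding: utf-8 -*-
# Bag-of-words counts, tf-idf weights and 2-gram counts for the two example sentences
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "John likes to watch movies. Mary likes movies too.",
    "John also likes to watch football games.",
]

# 1. bag-of-words: each column is a word, each value a raw count
bow = CountVectorizer()
print(bow.fit_transform(docs).toarray())   # shape: (2, vocabulary size)
print(bow.vocabulary_)                     # word -> column index

# 2. tf-idf: down-weights words that occur in every document (the "stop-word problem")
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(docs).toarray())

# 3. 2-gram counts: keeps some local word-order information
bigram = CountVectorizer(ngram_range=(2, 2))
print(bigram.fit_transform(docs).toarray())
print(bigram.vocabulary_)                  # e.g. "john likes", "watch football", ...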

0x2: Looking at NLP problems from the spatial-topology perspective of word vectors

This is the focus of this chapter. We know that unsupervised clustering methods such as SOM/GNG can learn (imprint) the spatial structure that already exists in the input samples (note: it must already exist) and express it through their weight vectors, and that deep neural networks such as CNNs can recognize the "spatial texture structure" in the input. But we want to dig deeper into a few questions

1. Why can a neural network recognize and classify such data? What actually happens inside the intermediate layers?
2. Is the network able to do this because the input samples themselves already contain some "classifiable information" that is hidden in the high-dimensional structure of the data and cannot be observed directly, so that what a deep network really does is reach into that high-dimensional space and recognize the patterns of this "classifiable information"?

To answer these questions, dimensionality-reduction visualization lets us look inside both the data and the network's weight vectors

Relevant Link:

https://en.wikipedia.org/wiki/Bag-of-words_model 
http://colah.github.io/posts/2015-01-Visualizing-Representations

 

2. Neural Networks Transform Space - the spatial structure inside a neural network

0x1: low-dimensional neural networks – networks which have only two neurons in each layer

We start from a network with a 2-dimensional input: only an input layer and an output layer, each with two neurons (x/y corresponding to the coordinates of the input and output points)

The figure is a two-dimensional plane (the input layer is a set of 2-D points). Its two curves represent two classes, and the points on the curves form our input dataset; we want the neural network to separate the two classes correctly, i.e. to model the classification

Because the network has only an input layer and an output layer, each with two neurons representing x and y, it can only try to "find" a straight line to separate the classes: the relationship between input and output is linear

Clearly this cannot give a "reasonably perfect" result, because the input dataset in the figure is not linearly separable. But if we add a hidden layer (3-dimensional), the hidden layer "stretches" the input vectors into a 3-dimensional space. This stretching essentially adds one more dimension to the original dataset and reveals a meaning of the data we could not see before: a situation that is linearly inseparable in 2-D becomes separable in 3-D

In 3-D space we have more useful information, so the decisions we can make are more accurate; a small sketch of this effect follows
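
As a hedged illustration of this idea (the dataset and the lifted coordinate below are my own stand-in, not the figure from the original post), a linear classifier fails on two concentric circles in 2-D but succeeds once a third coordinate z = x^2 + y^2 is appended, which is exactly the kind of coordinate a hidden layer can learn:

# -*- coding: utf-8 -*-
# A 2-D dataset that is not linearly separable becomes separable after being
# "stretched" into 3-D with one derived coordinate (assumes scikit-learn).
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# purely linear model in the original 2-D space: accuracy stays near chance
linear_2d = LogisticRegression().fit(X, y)
print("2-D linear accuracy:", linear_2d.score(X, y))

# lift to 3-D by appending z = x^2 + y^2
X3 = np.hstack([X, (X ** 2).sum(axis=1, keepdims=True)])
linear_3d = LogisticRegression().fit(X3, y)
print("3-D linear accuracy:", linear_3d.score(X3, y))  # close to 1.0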

0x2: Visualizing the classification of high-dimensional data

For 1/2/3-dimensional networks we can get a feel for how the network works by direct inspection, but beyond 4 dimensions (multi-class or complex decision problems) we can no longer probe the network's inner workings by observation and imagination alone. For MNIST, for example, the input dimension is the number of pixels in the image, i.e. 784. At that point we need dimensionality-reduction visualization to examine how, layer by layer, the network adjusts its weight vectors to gradually fit the true topological meaning of the input data in high-dimensional space

We can see that at the input layer (784 dimensions) the spatial distribution of the representations is still quite "mixed together", but as the hidden layers are trained, the "forced adjustment" of gradient descent pushes the neurons' weight vectors in the high-dimensional hidden space as far away from each other as possible: the larger the separation, the easier the decision in the subsequent activation/output layer (e.g. sigmoid) becomes, and the smaller the resulting loss. A sketch of how such layer representations can be projected with t-SNE follows
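
The following is a hedged, small-scale stand-in for the MNIST example above (it uses scikit-learn's 8x8 digits instead of MNIST, and an MLPClassifier whose hidden activations are recomputed by hand): it compares a t-SNE projection of the raw pixels with a t-SNE projection of the trained hidden-layer activations.

# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)     # 8x8 digit images, 64-dimensional inputs

clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=400, random_state=0).fit(X, y)

# recompute the hidden-layer activations by hand (MLPClassifier's default activation is ReLU)
hidden = np.maximum(0, X @ clf.coefs_[0] + clf.intercepts_[0])

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, data, title in [(axes[0], X, "raw pixels"), (axes[1], hidden, "hidden layer")]:
    emb = TSNE(n_components=2, random_state=0).fit_transform(data)
    ax.scatter(emb[:, 0], emb[:, 1], c=y, s=5, cmap="tab10")
    ax.set_title(title)
plt.show()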

Here let me add an aside with my own thinking on the question: why should the hidden layer have more neurons than the input layer?

1. First, I believe the essence of a hidden layer is to "stretch" the low-dimensional input into a higher-dimensional space, to probe whether the data exhibits some spatial topological structure there
2. At the same time, stretching from low to high dimensions is conditional: once high-dimensional data has been projected down to low dimensions, part of the high-dimensional information is lost, and stretching back up has to reconstruct that lost part
3. Recall the elementary geometric facts that two points determine a line and three non-collinear points determine a plane: stretching from low to high dimensions needs redundant information, i.e. several redundant low-dimensional vectors are needed to pin down one higher-dimensional object

Relevant Link:

http://blog.csdn.net/unoboros/article/details/30451213
http://www.cnblogs.com/boostable/p/iage_high_space_sphere.html

 

3. Understand the data itself by visualizing the high-dimensional input dataset - the spatial structure hidden in the input samples

Dimensionality-reduction visualization not only helps us probe the inner workings of a neural network, it also helps us understand the "true spatial/topological meaning" of the input sample set. To take a speculative example of my own: consider an attack log captured on a host or by a gateway product, with all its fields gathered together, e.g. url/ip/port/cmd_line/path/time and so on. These fields are what we can see; suppose they jointly form a "10-dimensional 10-tuple". Now allow me a bold hypothesis: suppose all of these attack-event logs are really 1024-dimensional, and the 10 dimensions we see are their "projection" down to 10 dimensions, in which some information that might have been valuable for our judgment has already been lost

With that in mind, I find the physics picture of higher-dimensional spaces somewhat inspiring

1. A two-dimensional space contains infinitely many one-dimensional ones (e.g. a line contains infinitely many points), a three-dimensional space contains infinitely many two-dimensional ones, and so on. In the popular picture of a 10-dimensional space the notion of time disappears: at a single "moment" one could move in every time direction at once and simultaneously decide to paint, sing, play ball, read and wash one's feet
2. Conversely, this suggests that if we want to stretch low-dimensional input into a high-dimensional space for decision making, we should not take a single event record observed in the low-dimensional space in isolation; we should aggregate all the events in a whole time window and stretch them into the high-dimensional space together. This idea appears frequently in, e.g., "probability and statistics" and "cluster analysis"

If an anomalous event/entity seems inseparable from the so-called normal events, it may well be because it really is inseparable in the low dimensions we currently observe; stretching these events into a higher-dimensional space gives us a better chance of finding a separating "hyper-surface"

Relevant Link: 

http://cs.stanford.edu/people/karpathy/cnnembed/

 

4. Example 1: Word Embeddings in NLP - the spatial structure hidden in text word sequences

Embedding is a very good vectorized representation for lifting raw, low-level data into a richer vector space. It is particularly well suited to NLP problems, and I think the embedding idea could also be applied to intrusion detection

In NLP problems, the samples fed into the network are usually a word or a word list. Each "word" in the sample set can be seen as a vector in a high-dimensional space whose original dimensionality is the total number of words, each direction of that space corresponding to one word of the vocabulary. What embedding does is compress and map these high-dimensional vectors onto a relatively low-dimensional space (typically 100 ~ 1000 dimensions); this compressed space is called the embedding space

The key ingredient of embedding is the vocabulary, which is a table of weight vectors. It is built over several rounds of training on the input samples, letting the embedding weight vectors gradually fit the input data. Since this is somewhat abstract, let us use a simple example to show how embedding is applied in practice

1. Suppose our training set contains only two records (already split into word lists)
  1) the wall is blue
  2) the wall is red
2. The goal of embedding is, from a global perspective and following the maximum-likelihood principle, to map every word into a high-dimensional space while preserving the original syntax and semantics
3. Because both "the wall is blue" and "the wall is red" appear in the training set, the embedding network gradually adjusts its weights W during gradient-descent training so that blue and red stay close in position and direction
4. Extending this simplest example, every word in the training samples gets mapped into its own "category", with words of the same category close together and pointing in similar directions

In this word-embedding space, every word is a high-dimensional vector (of the same dimensionality as the embedding space)

A word or phrase taken on its own is just a word, but once converted to an embedding vector, every word we look at is the macroscopic result of taking the entire dataset into account: the embedding vector encodes the word's relationship to the rest of the data

This vectorized representation brings several benefits

1. Words that are semantically close should also be close in the embedding space
2. Semantic directionality: embedding vectors are correlated not only in proximity but also in direction, i.e. v("woman") − v("man") ≃ v("queen") − v("king")
  1) semantically close words point in similar directions
  2) for semantically analogous word pairs, the relative offsets between the pairs are nearly the same

As an aside, this consistency of relative offsets in the embedding can be used as a kind of consistency check, e.g.

she is a woman / he is a man -> she is an aunt / he is an uncle
# both pairs are consistent, but if we encounter
he is an aunt, we can infer that the sentence is ill-formed
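
A hedged sketch of these properties with gensim follows. It assumes an already-trained Word2Vec model (the file name is a hypothetical placeholder), the `model.wv` accessors of gensim 1.x and later, and that all of the probed words are actually in the model's vocabulary; a small corpus such as the 20-newsgroups model trained in the next subsection will give much noisier analogies than large pretrained vectors.

# -*- coding: utf-8 -*-
import numpy as np
from gensim.models import Word2Vec

model = Word2Vec.load("word2vec.model")   # hypothetical path to a trained model

# semantic closeness: nearest neighbours of a word
print(model.wv.most_similar("woman", topn=5))

# directionality: v(king) - v(man) + v(woman) should land near v(queen)
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=5))

# relative-offset consistency: compare the offsets (woman - man) and (aunt - uncle)
d1 = model.wv["woman"] - model.wv["man"]
d2 = model.wv["aunt"] - model.wv["uncle"]
print(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))  # expect a high cosine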

0x1: Visualizing the spatial structure of word embeddings

We can use t-SNE to probe what a word2vec embedding is doing internally: word2vec maps words into a high-dimensional semantic vector space while preserving their syntactic/semantic relations, and t-SNE projects those high-dimensional vectors into 2/3 dimensions with as little distortion as possible

Note that the object being visualized is the embedding vocabulary, which contains the vectors (in R^n) of all the words learned by word2vec

# -*- coding: utf-8 -*-

from gensim.models.word2vec import Word2Vec
from sklearn.manifold import TSNE
from sklearn.datasets import fetch_20newsgroups
import re
import matplotlib.pyplot as plt


def clean(text):
    """Remove posting header, split by sentences and words, keep only letters"""
    lines = re.split(r'[?!.:]\s', re.sub(r'^.*Lines: \d+', '', re.sub(r'\n', ' ', text)))
    return [re.sub(r'[^a-zA-Z]', ' ', line).lower().split() for line in lines]



if __name__ == '__main__':
    # download example data ( may take a while)
    train = fetch_20newsgroups()

    sentences = [line for text in train.data for line in clean(text)]

    # note: parameter names follow the older gensim API (`size=`); in gensim 4.x it is `vector_size=`
    model = Word2Vec(sentences, workers=4, size=100, min_count=50, window=10, sample=1e-3)

    print (model.most_similar('memory'))

    X = model[model.wv.vocab]  # all word vectors of the vocabulary (old gensim indexing API)

    tsne = TSNE(n_components=2)
    X_tsne = tsne.fit_transform(X)

    plt.scatter(X_tsne[:, 0], X_tsne[:, 1])
    plt.show()

We can see that in the embedding vocabulary, the words closest to the computer term "memory" are words such as "cpu" and "cache", which matches their real-world usage

Zooming in on a local region of the plot:
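
The snippet below is a hedged continuation of the script above (it reuses `model`, `X_tsne` and matplotlib from that listing, together with the old `model.wv.vocab` API): it annotates the 2-D points nearest to "memory" with their words, which is one way to "zoom in" on a local region.

import numpy as np

words = list(model.wv.vocab)                 # same order as the rows of X / X_tsne
centre = words.index('memory')
d = np.linalg.norm(X_tsne - X_tsne[centre], axis=1)
nearest = np.argsort(d)[:30]                 # the 30 points closest to "memory" in the t-SNE plane

plt.scatter(X_tsne[nearest, 0], X_tsne[nearest, 1])
for i in nearest:
    plt.annotate(words[i], (X_tsne[i, 0], X_tsne[i, 1]), fontsize=8)
plt.show()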

Relevant Link:

http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/
http://www.iro.umontreal.ca/~lisa/pointeurs/turian-wordrepresentations-acl10.pdf
https://stackoverflow.com/questions/43166762/what-is-relation-between-tsne-and-word2vec
https://stackoverflow.com/questions/40581010/how-to-run-tsne-on-word2vec-created-from-gensim
http://learningaboutdata.blogspot.com/2014/06/plotting-word-embedding-using-tsne-with.html
https://stackoverflow.com/questions/43776572/visualise-word2vec-generated-from-gensim
https://www.quora.com/How-do-I-visualise-word2vec-word-vectors
http://nlp.yvespeirsman.be/blog/visualizing-word-embeddings-with-tsne/
《word2vec_中的数学原理详解》
http://blog.csdn.net/u014595019/article/details/51884529
http://download.csdn.net/detail/mzg12345678/7988741
https://www.tensorflow.org/versions/r0.12/tutorials/word2vec

 

5. Example 2: Paragraph Vectors in NLP - the spatial structure hidden inside vectorized paragraphs

The core idea of the Paragraph/Sentence Vector model is to take a paragraph from a document or a large block of sentences (often the title, a description, or a lead-in string), map that paragraph into the word-vector space, and thereby obtain a vector representation of the document/sentences

The Paragraph/Sentence Vector model does not compute a vector for a variable-length text directly. Instead, on top of the sentence's word vectors, it treats the extracted paragraph (similar to an abstract) as one extra "word": it initializes a weight vector for it and trains it without supervision from the surrounding context, adjusting the weights via gradient descent + backpropagation. A hedged gensim Doc2Vec sketch of the same idea appears right below
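
The demo.py / word2vec.py listings below use a custom Sent2Vec built on an old gensim fork. As a hedged, modern alternative sketch of the same paragraph-vector idea, gensim's built-in Doc2Vec can be used roughly as follows (the tiny corpus is illustrative, and `model.dv` assumes gensim 4.x; in gensim 3.x it is `model.docvecs`):

# -*- coding: utf-8 -*-
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    "the wall is blue",
    "the wall is red",
    "john likes to watch movies",
    "john also likes to watch football games",
]
documents = [TaggedDocument(words=text.split(), tags=[i]) for i, text in enumerate(corpus)]

# dm=1 -> distributed-memory model: the paragraph vector plus context words predict the target word
model = Doc2Vec(documents, vector_size=100, window=5, min_count=1, dm=1, epochs=50)

print(model.dv[0])                                       # trained vector of document 0
print(model.infer_vector("the wall is green".split()))   # vector inferred for an unseen paragraph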

0x1: Code

demo.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html


import logging
import sys
import os
from word2vec import Word2Vec, Sent2Vec, LineSentence

logging.basicConfig(format='%(asctime)s : %(threadName)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.info("running %s" % " ".join(sys.argv))

input_file = './modleTrain/test.txt'
# Embedding dimension = 100
# The maximum distance between the current and predicted word within a sentence = 5
# Model = CBOW
# Ignore total frequency lower than = 5.
model = Word2Vec(LineSentence(input_file), size=100, window=5, sg=0, min_count=5, workers=8)
model.save(input_file + '.model')
# save the word embedding vectors (the vocabulary) in word2vec text format
model.save_word2vec_format(input_file + '.vec')

sent_file = './modleTrain/sent.txt'
model = Sent2Vec(LineSentence(sent_file), model_file=input_file + '.model')
model.save_sent2vec_format(sent_file + '.vec')

program = os.path.basename(sys.argv[0])
logging.info("finished running %s" % program)

word2vec.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013 Radim Rehurek <me@radimrehurek.com>
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html


"""
Deep learning via word2vec's "skip-gram and CBOW models", using either
hierarchical softmax or negative sampling [1]_ [2]_.

The training algorithms were originally ported from the C package https://code.google.com/p/word2vec/
and extended with additional functionality.

For a blog tutorial on gensim word2vec, with an interactive web app trained on GoogleNews, visit http://radimrehurek.com/2014/02/word2vec-tutorial/

**Install Cython with `pip install cython` to use optimized word2vec training** (70x speedup [3]_).

Initialize a model with e.g.::

>>> model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)

Persist a model to disk with::

>>> model.save(fname)
>>> model = Word2Vec.load(fname)  # you can continue training with the loaded model!

The model can also be instantiated from an existing file on disk in the word2vec C format::

  >>> model = Word2Vec.load_word2vec_format('/tmp/vectors.txt', binary=False)  # C text format
  >>> model = Word2Vec.load_word2vec_format('/tmp/vectors.bin', binary=True)  # C binary format

You can perform various syntactic/semantic NLP word tasks with the model. Some of them
are already built-in::

  >>> model.most_similar(positive=['woman', 'king'], negative=['man'])
  [('queen', 0.50882536), ...]

  >>> model.doesnt_match("breakfast cereal dinner lunch".split())
  'cereal'

  >>> model.similarity('woman', 'man')
  0.73723527

  >>> model['computer']  # raw numpy vector of a word
  array([-0.00449447, -0.00310097,  0.02421786, ...], dtype=float32)

and so on.

If you're finished training a model (=no more updates, only querying), you can do

  >>> model.init_sims(replace=True)

to trim unneeded model memory = use (much) less RAM.

.. [1] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013.
.. [2] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality.
       In Proceedings of NIPS, 2013.
.. [3] Optimizing word2vec in gensim, http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/
"""

import logging
import sys
import os
import heapq
import time
from copy import deepcopy
import threading

try:
    from queue import Queue
except ImportError:
    from Queue import Queue

from numpy import exp, dot, zeros, outer, random, dtype, get_include, float32 as REAL, \
    uint32, seterr, array, uint8, vstack, argsort, fromstring, sqrt, newaxis, ndarray, empty, sum as np_sum

# logger = logging.getLogger("gensim.models.word2vec")
logger = logging.getLogger("sent2vec")

# from gensim import utils, matutils  # utility fnc for pickling, common scipy operations etc
import utils, matutils  # utility fnc for pickling, common scipy operations etc
from six import iteritems, itervalues, string_types
from six.moves import xrange

try:
    from gensim_addons.models.word2vec_inner import train_sentence_sg, train_sentence_cbow, FAST_VERSION
except ImportError:
    try:
        # try to compile and use the faster cython version
        import pyximport

        models_dir = os.path.dirname(__file__) or os.getcwd()
        pyximport.install(setup_args={"include_dirs": [models_dir, get_include()]})
        from word2vec_inner import train_sentence_sg, train_sentence_cbow, FAST_VERSION
    except:
        # failed... fall back to plain numpy (20-80x slower training than the above)
        FAST_VERSION = -1


        def train_sentence_sg(model, sentence, alpha, work=None):
            """
            Update skip-gram model by training on a single sentence.

            The sentence is a list of Vocab objects (or None, where the corresponding
            word is not in the vocabulary. Called internally from `Word2Vec.train()`.

            This is the non-optimized, Python version. If you have cython installed, gensim
            will use the optimized version from word2vec_inner instead.

            """
            if model.negative:
                # precompute negative labels
                labels = zeros(model.negative + 1)
                labels[0] = 1.0

            for pos, word in enumerate(sentence):
                if word is None:
                    continue  # OOV word in the input sentence => skip
                reduced_window = random.randint(model.window)  # `b` in the original word2vec code

                # now go over all words from the (reduced) window, predicting each one in turn
                start = max(0, pos - model.window + reduced_window)
                for pos2, word2 in enumerate(sentence[start: pos + model.window + 1 - reduced_window], start):
                    # don't train on OOV words and on the `word` itself
                    if word2 and not (pos2 == pos):
                        l1 = model.syn0[word2.index]
                        neu1e = zeros(l1.shape)

                        if model.hs:
                            # work on the entire tree at once, to push as much work into numpy's C routines as possible (performance)
                            l2a = deepcopy(model.syn1[word.point])  # 2d matrix, codelen x layer1_size
                            fa = 1.0 / (1.0 + exp(-dot(l1, l2a.T)))  # propagate hidden -> output
                            ga = (1 - word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
                            model.syn1[word.point] += outer(ga, l1)  # learn hidden -> output
                            neu1e += dot(ga, l2a)  # save error

                        if model.negative:
                            # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
                            word_indices = [word.index]
                            while len(word_indices) < model.negative + 1:
                                w = model.table[random.randint(model.table.shape[0])]
                                if w != word.index:
                                    word_indices.append(w)
                            l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
                            fb = 1. / (1. + exp(-dot(l1, l2b.T)))  # propagate hidden -> output
                            gb = (labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
                            model.syn1neg[word_indices] += outer(gb, l1)  # learn hidden -> output
                            neu1e += dot(gb, l2b)  # save error

                        model.syn0[word2.index] += neu1e  # learn input -> hidden

            return len([word for word in sentence if word is not None])


        def train_sentence_cbow(model, sentence, alpha, work=None, neu1=None):
            """
            Update CBOW model by training on a single sentence.

            The sentence is a list of Vocab objects (or None, where the corresponding
            word is not in the vocabulary. Called internally from `Word2Vec.train()`.

            This is the non-optimized, Python version. If you have cython installed, gensim
            will use the optimized version from word2vec_inner instead.

            """
            if model.negative:
                # precompute negative labels
                labels = zeros(model.negative + 1)
                labels[0] = 1.

            for pos, word in enumerate(sentence):
                if word is None:
                    continue  # OOV word in the input sentence => skip
                reduced_window = random.randint(model.window)  # `b` in the original word2vec code
                start = max(0, pos - model.window + reduced_window)
                window_pos = enumerate(sentence[start: pos + model.window + 1 - reduced_window], start)
                word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)]
                l1 = np_sum(model.syn0[word2_indices], axis=0)  # 1 x layer1_size
                if word2_indices and model.cbow_mean:
                    l1 /= len(word2_indices)
                neu1e = zeros(l1.shape)

                if model.hs:
                    l2a = model.syn1[word.point]  # 2d matrix, codelen x layer1_size
                    fa = 1. / (1. + exp(-dot(l1, l2a.T)))  # propagate hidden -> output
                    ga = (1. - word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
                    model.syn1[word.point] += outer(ga, l1)  # learn hidden -> output
                    neu1e += dot(ga, l2a)  # save error

                if model.negative:
                    # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
                    word_indices = [word.index]
                    while len(word_indices) < model.negative + 1:
                        w = model.table[random.randint(model.table.shape[0])]
                        if w != word.index:
                            word_indices.append(w)
                    l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
                    fb = 1. / (1. + exp(-dot(l1, l2b.T)))  # propagate hidden -> output
                    gb = (labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
                    model.syn1neg[word_indices] += outer(gb, l1)  # learn hidden -> output
                    neu1e += dot(gb, l2b)  # save error

                model.syn0[word2_indices] += neu1e  # learn input -> hidden, here for all words in the window separately

            return len([word for word in sentence if word is not None])


class Vocab(object):
    """A single vocabulary item, used internally for constructing binary trees (incl. both word leaves and inner nodes)."""

    def __init__(self, **kwargs):
        self.count = 0
        self.__dict__.update(kwargs)

    def __lt__(self, other):  # used for sorting in a priority queue
        return self.count < other.count

    def __str__(self):
        vals = ['%s:%r' % (key, self.__dict__[key]) for key in sorted(self.__dict__) if not key.startswith('_')]
        return "<" + ', '.join(vals) + ">"


class Word2Vec(utils.SaveLoad):
    """
    Class for training, using and evaluating neural networks described in https://code.google.com/p/word2vec/

    The model can be stored/loaded via its `save()` and `load()` methods, or stored/loaded in a format
    compatible with the original word2vec implementation via `save_word2vec_format()` and `load_word2vec_format()`.

    """

    def __init__(self, sentences=None, size=100, alpha=0.025, window=5, min_count=5,
                 sample=0, seed=1, workers=1, min_alpha=0.0001, sg=1, hs=1, negative=0, cbow_mean=0):
        """
        Initialize the model from an iterable of `sentences`. Each sentence is a
        list of words (unicode strings) that will be used for training.

        The `sentences` iterable can be simply a list, but for larger corpora,
        consider an iterable that streams the sentences directly from disk/network.
        See :class:`BrownCorpus`, :class:`Text8Corpus` or :class:`LineSentence` in
        this module for such examples.

        If you don't supply `sentences`, the model is left uninitialized -- use if
        you plan to initialize it in some other way.

        `sg` defines the training algorithm. By default (`sg=1`), skip-gram is used. Otherwise, `cbow` is employed.
        `size` is the dimensionality of the feature vectors.
        `window` is the maximum distance between the current and predicted word within a sentence.
        `alpha` is the initial learning rate (will linearly drop to zero as training progresses).
        `seed` = for the random number generator.
        `min_count` = ignore all words with total frequency lower than this.
        `sample` = threshold for configuring which higher-frequency words are randomly downsampled;
                default is 0 (off), useful value is 1e-5.
        `workers` = use this many worker threads to train the model (=faster training with multicore machines)
        `hs` = if 1 (default), hierarchical sampling will be used for model training (else set to 0)
        `negative` = if > 0, negative sampling will be used, the int for negative
                specifies how many "noise words" should be drawn (usually between 5-20)
        `cbow_mean` = if 0 (default), use the sum of the context word vectors. If 1, use the mean.
                Only applies when cbow is used.
        """
        self.vocab = {}  # mapping from a word (string) to a Vocab object
        self.index2word = []  # map from a word's matrix index (int) to word (string)
        self.sg = int(sg)
        self.table = None  # for negative sampling --> this needs a lot of RAM! consider setting back to None before saving
        self.layer1_size = int(size)
        if size % 4 != 0:
            logger.warning("consider setting layer size to a multiple of 4 for greater performance")
        self.alpha = float(alpha)
        self.window = int(window)
        self.seed = seed
        self.min_count = min_count
        self.sample = sample
        self.workers = workers
        self.min_alpha = min_alpha
        self.hs = hs
        self.negative = negative
        self.cbow_mean = int(cbow_mean)
        if sentences is not None:
            self.build_vocab(sentences)
            self.train(sentences)

    def make_table(self, table_size=100000000, power=0.75):
        """
        Create a table using stored vocabulary word counts for drawing random words in the negative
        sampling training routines.

        Called internally from `build_vocab()`.

        """
        logger.info("constructing a table with noise distribution from %i words" % len(self.vocab))
        # table (= list of words) of noise distribution for negative sampling
        vocab_size = len(self.index2word)
        self.table = zeros(table_size, dtype=uint32)

        if not vocab_size:
            logger.warning("empty vocabulary in word2vec, is this intended?")
            return

        # compute sum of all power (Z in paper)
        train_words_pow = float(sum([self.vocab[word].count ** power for word in self.vocab]))
        # go through the whole table and fill it up with the word indexes proportional to a word's count**power
        widx = 0
        # normalize count^0.75 by Z
        d1 = self.vocab[self.index2word[widx]].count ** power / train_words_pow
        for tidx in xrange(table_size):
            self.table[tidx] = widx
            if 1.0 * tidx / table_size > d1:
                widx += 1
                d1 += self.vocab[self.index2word[widx]].count ** power / train_words_pow
            if widx >= vocab_size:
                widx = vocab_size - 1

    def create_binary_tree(self):
        """
        Create a binary Huffman tree using stored vocabulary word counts. Frequent words
        will have shorter binary codes. Called internally from `build_vocab()`.

        """
        logger.info("constructing a huffman tree from %i words" % len(self.vocab))

        # build the huffman tree
        heap = list(itervalues(self.vocab))
        heapq.heapify(heap)

        # Repeatedly pop the two nodes with the smallest counts, make them the left/right children (smaller on the left),
        # and push the merged node back as their parent; the Huffman tree grows bottom-up, so the rarer a word, the closer it sits to the leaves
        for i in xrange(len(self.vocab) - 1):
            min1, min2 = heapq.heappop(heap), heapq.heappop(heap)
            heapq.heappush(heap, Vocab(count=min1.count + min2.count, index=i + len(self.vocab), left=min1, right=min2))

        # recurse over the tree, assigning a binary code to each vocabulary word
        if heap:
            max_depth, stack = 0, [(heap[0], [], [])]
            while stack:
                node, codes, points = stack.pop()
                if node.index < len(self.vocab):
                    # leaf node => store its path from the root
                    node.code, node.point = codes, points
                    max_depth = max(len(codes), max_depth)
                else:
                    # inner node => continue recursion
                    points = array(list(points) + [node.index - len(self.vocab)], dtype=uint32)
                    stack.append((node.left, array(list(codes) + [0], dtype=uint8), points))
                    stack.append((node.right, array(list(codes) + [1], dtype=uint8), points))

            logger.info("built huffman tree with maximum node depth %i" % max_depth)

    def precalc_sampling(self):
        """Precalculate each vocabulary item's threshold for sampling"""
        if self.sample:
            logger.info(
                "frequent-word downsampling, threshold %g; progress tallies will be approximate" % (self.sample))
            total_words = sum(v.count for v in itervalues(self.vocab))
            threshold_count = float(self.sample) * total_words
        # compute each word's sampling probability from its occurrence count
        for v in itervalues(self.vocab):
            prob = (sqrt(v.count / threshold_count) + 1) * (threshold_count / v.count) if self.sample else 1.0
            v.sample_probability = min(prob, 1.0)
            # print v

    def build_vocab(self, sentences):
        """
        Build vocabulary from a sequence of sentences (can be a once-only generator stream).
        Each sentence must be a list of unicode strings.

        """
        logger.info("collecting all words and their counts")
        sentence_no, vocab = -1, {}
        total_words = 0
        # count how many times each word appears in the training corpus
        for sentence_no, sentence in enumerate(sentences):
            if sentence_no % 10000 == 0:
                logger.info("PROGRESS: at sentence #%i, processed %i words and %i word types" % (
                sentence_no, total_words, len(vocab)))
            for word in sentence:
                total_words += 1
                if word in vocab:
                    vocab[word].count += 1
                else:
                    vocab[word] = Vocab(count=1)
        logger.info("collected %i word types from a corpus of %i words and %i sentences" % (
        len(vocab), total_words, sentence_no + 1))

        # assign a unique index to each word
        # give each word an index in the order it was encountered (not sorted by frequency), so words and indexes in the vocabulary can be converted back and forth
        self.vocab, self.index2word = {}, []
        for word, v in iteritems(vocab):
            if v.count >= self.min_count:
                v.index = len(self.vocab)
                self.index2word.append(word)
                self.vocab[word] = v
                # print "word: ", word
                # print "v:", v
        logger.info("total %i word types after removing those with count<%s" % (len(self.vocab), self.min_count))
        # print self.vocab
        # print self.index2word

        # hierarchical softmax
        if self.hs:
            # add info about each word's Huffman encoding
            self.create_binary_tree()
        if self.negative:
            # build the table for drawing random words (for negative sampling)
            self.make_table()
        # precalculate downsampling thresholds
        self.precalc_sampling()
        self.reset_weights()

    def train(self, sentences, total_words=None, word_count=0, chunksize=100):
        """
        Update the model's neural weights from a sequence of sentences (can be a once-only generator stream).
        Each sentence must be a list of unicode strings.

        """
        if FAST_VERSION < 0:
            import warnings
            warnings.warn(
                "Cython compilation failed, training will be slow. Do you have Cython installed? `pip install cython`")
        logger.info("training model with %i workers on %i vocabulary and %i features, "
                    "using 'skipgram'=%s 'hierarchical softmax'=%s 'subsample'=%s and 'negative sampling'=%s" %
                    (self.workers, len(self.vocab), self.layer1_size, self.sg, self.hs, self.sample, self.negative))

        if not self.vocab:
            raise RuntimeError("you must first build vocabulary before training the model")

        start, next_report = time.time(), [1.0]
        word_count = [word_count]
        total_words = total_words or int(sum(v.count * v.sample_probability for v in itervalues(self.vocab)))
        jobs = Queue(
            maxsize=2 * self.workers)  # buffer ahead only a limited number of jobs.. this is the reason we can't simply use ThreadPool :(
        lock = threading.Lock()  # for shared state (=number of words trained so far, log reports...)

        def worker_train():
            """Train the model, lifting lists of sentences from the jobs queue."""
            work = zeros(self.layer1_size, dtype=REAL)  # each thread must have its own work memory
            neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL)

            while True:
                job = jobs.get()
                if job is None:  # data finished, exit
                    break
                # update the learning rate before every job
                alpha = max(self.min_alpha, self.alpha * (1 - 1.0 * word_count[0] / total_words))
                # how many words did we train on? out-of-vocabulary (unknown) words do not count
                if self.sg:
                    job_words = sum(train_sentence_sg(self, sentence, alpha, work) for sentence in job)
                else:
                    job_words = sum(train_sentence_cbow(self, sentence, alpha, work, neu1) for sentence in job)
                with lock:
                    word_count[0] += job_words
                    elapsed = time.time() - start
                    if elapsed >= next_report[0]:
                        logger.info("PROGRESS: at %.2f%% words, alpha %.05f, %.0f words/s" %
                                    (100.0 * word_count[0] / total_words, alpha,
                                     word_count[0] / elapsed if elapsed else 0.0))
                        next_report[0] = elapsed + 1.0  # don't flood the log, wait at least a second between progress reports

        workers = [threading.Thread(target=worker_train) for _ in xrange(self.workers)]
        for thread in workers:
            thread.daemon = True  # make interrupting the process with ctrl+c easier
            thread.start()

        def prepare_sentences():
            for sentence in sentences:
                # avoid calling random_sample() where prob >= 1, to speed things up a little:
                sampled = [self.vocab[word] for word in sentence
                           if word in self.vocab and (self.vocab[word].sample_probability >= 1.0 or self.vocab[
                        word].sample_probability >= random.random_sample())]
                yield sampled

        # convert input strings to Vocab objects (eliding OOV/downsampled words), and start filling the jobs queue
        for job_no, job in enumerate(utils.grouper(prepare_sentences(), chunksize)):
            logger.debug("putting job #%i in the queue, qsize=%i" % (job_no, jobs.qsize()))
            jobs.put(job)
        logger.info("reached the end of input; waiting to finish %i outstanding jobs" % jobs.qsize())
        for _ in xrange(self.workers):
            jobs.put(None)  # give the workers heads up that they can finish -- no more work!

        for thread in workers:
            thread.join()

        elapsed = time.time() - start
        logger.info("training on %i words took %.1fs, %.0f words/s" %
                    (word_count[0], elapsed, word_count[0] / elapsed if elapsed else 0.0))

        return word_count[0]

    def reset_weights(self):
        """Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary."""
        logger.info("resetting layer weights")
        random.seed(self.seed)
        self.syn0 = empty((len(self.vocab), self.layer1_size), dtype=REAL)
        # randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once
        for i in xrange(len(self.vocab)):
            self.syn0[i] = (random.rand(self.layer1_size) - 0.5) / self.layer1_size
        if self.hs:
            self.syn1 = zeros((len(self.vocab), self.layer1_size), dtype=REAL)
        if self.negative:
            self.syn1neg = zeros((len(self.vocab), self.layer1_size), dtype=REAL)
        self.syn0norm = None

    def save_word2vec_format(self, fname, fvocab=None, binary=False):
        """
        Store the input-hidden weight matrix in the same format used by the original
        C word2vec-tool, for compatibility.

        """
        if fvocab is not None:
            logger.info("Storing vocabulary in %s" % (fvocab))
            with utils.smart_open(fvocab, 'wb') as vout:
                for word, vocab in sorted(iteritems(self.vocab), key=lambda item: -item[1].count):
                    vout.write(utils.to_utf8("%s %s\n" % (word, vocab.count)))
        logger.info("storing %sx%s projection weights into %s" % (len(self.vocab), self.layer1_size, fname))
        assert (len(self.vocab), self.layer1_size) == self.syn0.shape
        with utils.smart_open(fname, 'wb') as fout:
            fout.write(utils.to_utf8("%s %s\n" % self.syn0.shape))
            # store in sorted order: most frequent words at the top
            for word, vocab in sorted(iteritems(self.vocab), key=lambda item: -item[1].count):
                row = self.syn0[vocab.index]
                if binary:
                    fout.write(utils.to_utf8(word) + b" " + row.tostring())
                else:
                    fout.write(utils.to_utf8("%s %s\n" % (word, ' '.join("%f" % val for val in row))))

    @classmethod
    def load_word2vec_format(cls, fname, fvocab=None, binary=False, norm_only=True):
        """
        Load the input-hidden weight matrix from the original C word2vec-tool format.

        Note that the information stored in the file is incomplete (the binary tree is missing),
        so while you can query for word similarity etc., you cannot continue training
        with a model loaded this way.

        `binary` is a boolean indicating whether the data is in binary word2vec format.
        `norm_only` is a boolean indicating whether to only store normalised word2vec vectors in memory.
        Word counts are read from `fvocab` filename, if set (this is the file generated
        by `-save-vocab` flag of the original C tool).
        """
        counts = None
        if fvocab is not None:
            logger.info("loading word counts from %s" % (fvocab))
            counts = {}
            with utils.smart_open(fvocab) as fin:
                for line in fin:
                    word, count = utils.to_unicode(line).strip().split()
                    counts[word] = int(count)

        logger.info("loading projection weights from %s" % (fname))
        with utils.smart_open(fname) as fin:
            header = utils.to_unicode(fin.readline())
            vocab_size, layer1_size = map(int, header.split())  # throws for invalid file format
            result = Word2Vec(size=layer1_size)
            result.syn0 = zeros((vocab_size, layer1_size), dtype=REAL)
            if binary:
                binary_len = dtype(REAL).itemsize * layer1_size
                for line_no in xrange(vocab_size):
                    # mixed text and binary: read text first, then binary
                    word = []
                    while True:
                        ch = fin.read(1)
                        if ch == b' ':
                            break
                        if ch != b'\n':  # ignore newlines in front of words (some binary files have newline, some don't)
                            word.append(ch)
                    word = utils.to_unicode(b''.join(word))
                    if counts is None:
                        result.vocab[word] = Vocab(index=line_no, count=vocab_size - line_no)
                    elif word in counts:
                        result.vocab[word] = Vocab(index=line_no, count=counts[word])
                    else:
                        logger.warning("vocabulary file is incomplete")
                        result.vocab[word] = Vocab(index=line_no, count=None)
                    result.index2word.append(word)
                    result.syn0[line_no] = fromstring(fin.read(binary_len), dtype=REAL)
            else:
                for line_no, line in enumerate(fin):
                    parts = utils.to_unicode(line).split()
                    if len(parts) != layer1_size + 1:
                        raise ValueError("invalid vector on line %s (is this really the text format?)" % (line_no))
                    word, weights = parts[0], map(REAL, parts[1:])
                    if counts is None:
                        result.vocab[word] = Vocab(index=line_no, count=vocab_size - line_no)
                    elif word in counts:
                        result.vocab[word] = Vocab(index=line_no, count=counts[word])
                    else:
                        logger.warning("vocabulary file is incomplete")
                        result.vocab[word] = Vocab(index=line_no, count=None)
                    result.index2word.append(word)
                    result.syn0[line_no] = weights
        logger.info("loaded %s matrix from %s" % (result.syn0.shape, fname))
        result.init_sims(norm_only)
        return result

    def most_similar(self, positive=[], negative=[], topn=10):
        """
        Find the top-N most similar words. Positive words contribute positively towards the
        similarity, negative words negatively.

        This method computes cosine similarity between a simple mean of the projection
        weight vectors of the given words, and corresponds to the `word-analogy` and
        `distance` scripts in the original word2vec implementation.

        Example::

          >>> trained_model.most_similar(positive=['woman', 'king'], negative=['man'])
          [('queen', 0.50882536), ...]

        """
        self.init_sims()

        if isinstance(positive, string_types) and not negative:
            # allow calls like most_similar('dog'), as a shorthand for most_similar(['dog'])
            positive = [positive]

        # add weights for each word, if not already present; default to 1.0 for positive and -1.0 for negative words
        positive = [(word, 1.0) if isinstance(word, string_types + (ndarray,))
                    else word for word in positive]
        negative = [(word, -1.0) if isinstance(word, string_types + (ndarray,))
                    else word for word in negative]

        # compute the weighted average of all words
        all_words, mean = set(), []
        for word, weight in positive + negative:
            if isinstance(word, ndarray):
                mean.append(weight * word)
            elif word in self.vocab:
                mean.append(weight * self.syn0norm[self.vocab[word].index])
                all_words.add(self.vocab[word].index)
            else:
                raise KeyError("word '%s' not in vocabulary" % word)
        if not mean:
            raise ValueError("cannot compute similarity with no input")
        mean = matutils.unitvec(array(mean).mean(axis=0)).astype(REAL)

        dists = dot(self.syn0norm, mean)
        if not topn:
            return dists
        best = argsort(dists)[::-1][:topn + len(all_words)]
        # ignore (don't return) words from the input
        result = [(self.index2word[sim], float(dists[sim])) for sim in best if sim not in all_words]
        return result[:topn]

    def doesnt_match(self, words):
        """
        Which word from the given list doesn't go with the others?

        Example::

          >>> trained_model.doesnt_match("breakfast cereal dinner lunch".split())
          'cereal'

        """
        self.init_sims()

        words = [word for word in words if word in self.vocab]  # filter out OOV words
        logger.debug("using words %s" % words)
        if not words:
            raise ValueError("cannot select a word from an empty list")
        vectors = vstack(self.syn0norm[self.vocab[word].index] for word in words).astype(REAL)
        mean = matutils.unitvec(vectors.mean(axis=0)).astype(REAL)
        dists = dot(vectors, mean)
        return sorted(zip(dists, words))[0][1]

    def __getitem__(self, word):
        """
        Return a word's representations in vector space, as a 1D numpy array.

        Example::

          >>> trained_model['woman']
          array([ -1.40128313e-02, ...]

        """
        return self.syn0[self.vocab[word].index]

    def __contains__(self, word):
        return word in self.vocab

    def similarity(self, w1, w2):
        """
        Compute cosine similarity between two words.

        Example::

          >>> trained_model.similarity('woman', 'man')
          0.73723527

          >>> trained_model.similarity('woman', 'woman')
          1.0

        """
        return dot(matutils.unitvec(self[w1]), matutils.unitvec(self[w2]))

    def init_sims(self, replace=False):
        """
        Precompute L2-normalized vectors.

        If `replace` is set, forget the original vectors and only keep the normalized
        ones = saves lots of memory!

        Note that you **cannot continue training** after doing a replace. The model becomes
        effectively read-only = you can call `most_similar`, `similarity` etc., but not `train`.

        """
        if getattr(self, 'syn0norm', None) is None or replace:
            logger.info("precomputing L2-norms of word weight vectors")
            if replace:
                for i in xrange(self.syn0.shape[0]):
                    self.syn0[i, :] /= sqrt((self.syn0[i, :] ** 2).sum(-1))
                self.syn0norm = self.syn0
                if hasattr(self, 'syn1'):
                    del self.syn1
            else:
                self.syn0norm = (self.syn0 / sqrt((self.syn0 ** 2).sum(-1))[..., newaxis]).astype(REAL)

    def accuracy(self, questions, restrict_vocab=30000):
        """
        Compute accuracy of the model. `questions` is a filename where lines are
        4-tuples of words, split into sections by ": SECTION NAME" lines.
        See https://code.google.com/p/word2vec/source/browse/trunk/questions-words.txt for an example.

        The accuracy is reported (=printed to log and returned as a list) for each
        section separately, plus there's one aggregate summary at the end.

        Use `restrict_vocab` to ignore all questions containing a word whose frequency
        is not in the top-N most frequent words (default top 30,000).

        This method corresponds to the `compute-accuracy` script of the original C word2vec.

        """
        ok_vocab = dict(sorted(iteritems(self.vocab),
                               key=lambda item: -item[1].count)[:restrict_vocab])
        ok_index = set(v.index for v in itervalues(ok_vocab))

        def log_accuracy(section):
            correct, incorrect = section['correct'], section['incorrect']
            if correct + incorrect > 0:
                logger.info("%s: %.1f%% (%i/%i)" %
                            (section['section'], 100.0 * correct / (correct + incorrect),
                             correct, correct + incorrect))

        sections, section = [], None
        for line_no, line in enumerate(utils.smart_open(questions)):
            # TODO: use level3 BLAS (=evaluate multiple questions at once), for speed
            line = utils.to_unicode(line)
            if line.startswith(': '):
                # a new section starts => store the old section
                if section:
                    sections.append(section)
                    log_accuracy(section)
                section = {'section': line.lstrip(': ').strip(), 'correct': 0, 'incorrect': 0}
            else:
                if not section:
                    raise ValueError("missing section header before line #%i in %s" % (line_no, questions))
                try:
                    a, b, c, expected = [word.lower() for word in
                                         line.split()]  # TODO assumes vocabulary preprocessing uses lowercase, too...
                except:
                    logger.info("skipping invalid line #%i in %s" % (line_no, questions))
                    continue
                if a not in ok_vocab or b not in ok_vocab or c not in ok_vocab or expected not in ok_vocab:
                    logger.debug("skipping line #%i with OOV words: %s" % (line_no, line))
                    continue

                ignore = set(self.vocab[v].index for v in [a, b, c])  # indexes of words to ignore
                predicted = None
                # find the most likely prediction, ignoring OOV words and input words
                for index in argsort(self.most_similar(positive=[b, c], negative=[a], topn=False))[::-1]:
                    if index in ok_index and index not in ignore:
                        predicted = self.index2word[index]
                        if predicted != expected:
                            logger.debug("%s: expected %s, predicted %s" % (line.strip(), expected, predicted))
                        break
                section['correct' if predicted == expected else 'incorrect'] += 1
        if section:
            # store the last section, too
            sections.append(section)
            log_accuracy(section)

        total = {'section': 'total', 'correct': sum(s['correct'] for s in sections),
                 'incorrect': sum(s['incorrect'] for s in sections)}
        log_accuracy(total)
        sections.append(total)
        return sections

    def __str__(self):
        return "Word2Vec(vocab=%s, size=%s, alpha=%s)" % (len(self.index2word), self.layer1_size, self.alpha)

    def save(self, *args, **kwargs):
        kwargs['ignore'] = kwargs.get('ignore', ['syn0norm'])  # don't bother storing the cached normalized vectors
        super(Word2Vec, self).save(*args, **kwargs)


class Sent2Vec(utils.SaveLoad):
    def __init__(self, sentences, model_file=None, alpha=0.025, window=5, sample=0, seed=1,
                 workers=1, min_alpha=0.0001, sg=1, hs=1, negative=0, cbow_mean=0, iteration=1):
        self.sg = int(sg)
        self.table = None  # for negative sampling --> this needs a lot of RAM! consider setting back to None before saving
        self.alpha = float(alpha)
        self.window = int(window)
        self.seed = seed
        self.sample = sample
        self.workers = workers
        self.min_alpha = min_alpha
        self.hs = hs
        self.negative = negative
        self.cbow_mean = int(cbow_mean)
        self.iteration = iteration

        if model_file and sentences:
            self.w2v = Word2Vec.load(model_file)
            self.vocab = self.w2v.vocab
            self.layer1_size = self.w2v.layer1_size
            self.reset_sent_vec(sentences)
            for i in range(iteration):
                self.train_sent(sentences)

    def reset_sent_vec(self, sentences):
        """Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary."""
        logger.info("resetting vectors for sentences")
        random.seed(self.seed)
        self.sents_len = 0
        for sent in sentences:
            self.sents_len += 1
        self.sents = empty((self.sents_len, self.layer1_size), dtype=REAL)
        # randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once
        for i in xrange(self.sents_len):
            self.sents[i] = (random.rand(self.layer1_size) - 0.5) / self.layer1_size

    def train_sent(self, sentences, total_words=None, word_count=0, sent_count=0, chunksize=100):
        """
        Update the model's neural weights from a sequence of sentences (can be a once-only generator stream).
        Each sentence must be a list of unicode strings.

        """
        logger.info("training model with %i workers on %i sentences and %i features, "
                    "using 'skipgram'=%s 'hierarchical softmax'=%s 'subsample'=%s and 'negative sampling'=%s" %
                    (self.workers, self.sents_len, self.layer1_size, self.sg, self.hs, self.sample, self.negative))

        if not self.vocab:
            raise RuntimeError("you must first build vocabulary before training the model")

        start, next_report = time.time(), [1.0]
        word_count = [word_count]
        sent_count = [sent_count]
        total_words = total_words or sum(v.count for v in itervalues(self.vocab))
        total_sents = self.sents_len * self.iteration
        jobs = Queue(
            maxsize=2 * self.workers)  # buffer ahead only a limited number of jobs.. this is the reason we can't simply use ThreadPool :(
        lock = threading.Lock()  # for shared state (=number of words trained so far, log reports...)

        def worker_train():
            """Train the model, lifting lists of sentences from the jobs queue."""
            work = zeros(self.layer1_size, dtype=REAL)  # each thread must have its own work memory
            neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL)

            while True:
                job = jobs.get()
                if job is None:  # data finished, exit
                    break
                # update the learning rate before every job
                alpha = max(self.min_alpha, self.alpha * (1 - 1.0 * word_count[0] / total_words))
                if self.sg:
                    job_words = sum(self.train_sent_vec_sg(self.w2v, sent_no, sentence, alpha, work)
                                    for sent_no, sentence in job)
                else:
                    job_words = sum(self.train_sent_vec_cbow(self.w2v, sent_no, sentence, alpha, work, neu1)
                                    for sent_no, sentence in job)
                with lock:
                    word_count[0] += job_words
                    sent_count[0] += chunksize
                    elapsed = time.time() - start
                    if elapsed >= next_report[0]:
                        logger.info("PROGRESS: at %.2f%% sents, alpha %.05f, %.0f words/s" %
                                    (100.0 * sent_count[0] / total_sents, alpha,
                                     word_count[0] / elapsed if elapsed else 0.0))
                        next_report[0] = elapsed + 1.0  # don't flood the log, wait at least a second between progress reports

        workers = [threading.Thread(target=worker_train) for _ in xrange(self.workers)]
        for thread in workers:
            thread.daemon = True  # make interrupting the process with ctrl+c easier
            thread.start()

        def prepare_sentences():
            for sent_no, sentence in enumerate(sentences):
                # avoid calling random_sample() where prob >= 1, to speed things up a little:
                # sampled = [self.vocab[word] for word in sentence
                #            if word in self.vocab and (self.vocab[word].sample_probability >= 1.0 or self.vocab[word].sample_probability >= random.random_sample())]
                sampled = [self.vocab.get(word, None) for word in sentence]
                yield (sent_no, sampled)

        # convert input strings to Vocab objects (eliding OOV/downsampled words), and start filling the jobs queue
        for job_no, job in enumerate(utils.grouper(prepare_sentences(), chunksize)):
            logger.debug("putting job #%i in the queue, qsize=%i" % (job_no, jobs.qsize()))
            jobs.put(job)
        logger.info("reached the end of input; waiting to finish %i outstanding jobs" % jobs.qsize())
        for _ in xrange(self.workers):
            jobs.put(None)  # give the workers heads up that they can finish -- no more work!

        for thread in workers:
            thread.join()

        elapsed = time.time() - start
        logger.info("training on %i words took %.1fs, %.0f words/s" %
                    (word_count[0], elapsed, word_count[0] / elapsed if elapsed else 0.0))

        return word_count[0]

    def train_sent_vec_cbow(self, model, sent_no, sentence, alpha, work=None, neu1=None):
        """
        Update CBOW model by training on a single sentence.

        The sentence is a list of Vocab objects (or None, where the corresponding
        word is not in the vocabulary. Called internally from `Word2Vec.train()`.

        This is the non-optimized, Python version. If you have cython installed, gensim
        will use the optimized version from word2vec_inner instead.

        """
        sent_vec = self.sents[sent_no]
        if self.negative:
            # precompute negative labels
            labels = zeros(self.negative + 1)
            labels[0] = 1.

        for pos, word in enumerate(sentence):
            if word is None:
                continue  # OOV word in the input sentence => skip
            reduced_window = random.randint(self.window)  # `b` in the original word2vec code
            start = max(0, pos - self.window + reduced_window)
            window_pos = enumerate(sentence[start: pos + self.window + 1 - reduced_window], start)
            word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)]
            l1 = np_sum(model.syn0[word2_indices], axis=0)  # 1 x layer1_size
            l1 += sent_vec
            if word2_indices and self.cbow_mean:
                l1 /= len(word2_indices)
            neu1e = zeros(l1.shape)

            if self.hs:
                l2a = model.syn1[word.point]  # 2d matrix, codelen x layer1_size
                fa = 1. / (1. + exp(-dot(l1, l2a.T)))  # propagate hidden -> output
                ga = (1. - word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
                # model.syn1[word.point] += outer(ga, l1) # learn hidden -> output
                neu1e += dot(ga, l2a)  # save error

            if self.negative:
                # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
                word_indices = [word.index]
                while len(word_indices) < self.negative + 1:
                    w = model.table[random.randint(model.table.shape[0])]
                    if w != word.index:
                        word_indices.append(w)
                l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
                fb = 1. / (1. + exp(-dot(l1, l2b.T)))  # propagate hidden -> output
                gb = (labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
                # model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output
                neu1e += dot(gb, l2b)  # save error

            # model.syn0[word2_indices] += neu1e # learn input -> hidden, here for all words in the window separately
            self.sents[sent_no] += neu1e  # learn input -> hidden, here for all words in the window separately

        return len([word for word in sentence if word is not None])

    def train_sent_vec_sg(self, model, sent_no, sentence, alpha, work=None):
        """
        Update skip-gram model by training on a single sentence.

        The sentence is a list of Vocab objects (or None, where the corresponding
        word is not in the vocabulary). Called internally from `Word2Vec.train()`.

        This is the non-optimized, Python version. If you have cython installed, gensim
        will use the optimized version from word2vec_inner instead.

        """
        if self.negative:
            # precompute negative labels
            labels = zeros(self.negative + 1)
            labels[0] = 1.0

        for pos, word in enumerate(sentence):
            if word is None:
                continue  # OOV word in the input sentence => skip
            reduced_window = random.randint(model.window)  # `b` in the original word2vec code

            # now go over all words from the (reduced) window, predicting each one in turn
            start = max(0, pos - model.window + reduced_window)
            for pos2, word2 in enumerate(sentence[start: pos + model.window + 1 - reduced_window], start):
                # don't train on OOV words and on the `word` itself
                if word2:
                    # l1 = model.syn0[word.index]
                    l1 = self.sents[sent_no]
                    neu1e = zeros(l1.shape)

                    if self.hs:
                        # work on the entire tree at once, to push as much work into numpy's C routines as possible (performance)
                        l2a = deepcopy(model.syn1[word2.point])  # 2d matrix, codelen x layer1_size
                        fa = 1.0 / (1.0 + exp(-dot(l1, l2a.T)))  # propagate hidden -> output
                        ga = (1 - word2.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
                        # model.syn1[word2.point] += outer(ga, l1)  # learn hidden -> output
                        neu1e += dot(ga, l2a)  # save error

                    if self.negative:
                        # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
                        word_indices = [word2.index]
                        while len(word_indices) < model.negative + 1:
                            w = model.table[random.randint(model.table.shape[0])]
                            if w != word2.index:
                                word_indices.append(w)
                        l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
                        fb = 1. / (1. + exp(-dot(l1, l2b.T)))  # propagate hidden -> output
                        gb = (labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
                        # model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output
                        neu1e += dot(gb, l2b)  # save error

                    # model.syn0[word.index] += neu1e  # learn input -> hidden
                    self.sents[sent_no] += neu1e  # learn input -> hidden

        return len([word for word in sentence if word is not None])

    def save_sent2vec_format(self, fname):
        """
        Store the input-hidden weight matrix in the same format used by the original
        C word2vec-tool, for compatibility.

        """
        logger.info("storing %sx%s projection weights into %s" % (self.sents_len, self.layer1_size, fname))
        assert (self.sents_len, self.layer1_size) == self.sents.shape
        with utils.smart_open(fname, 'wb') as fout:
            fout.write(utils.to_utf8("%s %s\n" % self.sents.shape))
            # store the sentence vectors in index order, one per line
            for sent_no in xrange(self.sents_len):
                row = self.sents[sent_no]
                fout.write(utils.to_utf8("sent_%d %s\n" % (sent_no, ' '.join("%f" % val for val in row))))

    def similarity(self, sent1, sent2):
        """
        Compute cosine similarity between two sentences. sent1 and sent2 are
        indices into the training file.

        Example::

          >>> trained_model.similarity(0, 0)
          1.0

          >>> trained_model.similarity(1, 3)
          0.73

        """
        return dot(matutils.unitvec(self.sents[sent1]), matutils.unitvec(self.sents[sent2]))


class BrownCorpus(object):
    """Iterate over sentences from the Brown corpus (part of NLTK data)."""

    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            fname = os.path.join(self.dirname, fname)
            if not os.path.isfile(fname):
                continue
            for line in utils.smart_open(fname):
                line = utils.to_unicode(line)
                # each file line is a single sentence in the Brown corpus
                # each token is WORD/POS_TAG
                token_tags = [t.split('/') for t in line.split() if len(t.split('/')) == 2]
                # ignore words with non-alphabetic tags like ",", "!" etc (punctuation, weird stuff)
                words = ["%s/%s" % (token.lower(), tag[:2]) for token, tag in token_tags if tag[:2].isalpha()]
                if not words:  # don't bother sending out empty sentences
                    continue
                yield words


class Text8Corpus(object):
    """Iterate over sentences from the "text8" corpus, unzipped from http://mattmahoney.net/dc/text8.zip ."""

    def __init__(self, fname):
        self.fname = fname

    def __iter__(self):
        # the entire corpus is one gigantic line -- there are no sentence marks at all
        # so just split the sequence of tokens arbitrarily: 1 sentence = 1000 tokens
        sentence, rest, max_sentence_length = [], b'', 1000
        with utils.smart_open(self.fname) as fin:
            while True:
                text = rest + fin.read(8192)  # avoid loading the entire file (=1 line) into RAM
                if text == rest:  # EOF
                    sentence.extend(rest.split())  # return the last chunk of words, too (may be shorter/longer)
                    if sentence:
                        yield sentence
                    break
                last_token = text.rfind(b' ')  # the last token may have been split in two... keep it for the next iteration
                words, rest = (utils.to_unicode(text[:last_token]).split(), text[last_token:].strip()) if last_token >= 0 else ([], text)
                sentence.extend(words)
                while len(sentence) >= max_sentence_length:
                    yield sentence[:max_sentence_length]
                    sentence = sentence[max_sentence_length:]


class LineSentence(object):
    """Simple format: one sentence = one line; words already preprocessed and separated by whitespace."""

    def __init__(self, source):
        """
        `source` can be either a string or a file object.

        Example::

            sentences = LineSentence('myfile.txt')

        Or for compressed files::

            sentences = LineSentence('compressed_text.txt.bz2')
            sentences = LineSentence('compressed_text.txt.gz')

        """
        self.source = source

    def __iter__(self):
        """Iterate through the lines in the source."""
        try:
            # Assume it is a file-like object and try treating it as such
            # Things that don't have seek will trigger an exception
            self.source.seek(0)
            for line in self.source:
                yield utils.to_unicode(line).split()
        except AttributeError:
            # If it didn't work like a file, use it as a string filename
            with utils.smart_open(self.source) as fin:
                for line in fin:
                    yield utils.to_unicode(line).split()


# Example: ./word2vec.py ~/workspace/word2vec/text8 ~/workspace/word2vec/questions-words.txt ./text8
if __name__ == "__main__":
    logging.basicConfig(format='%(asctime)s : %(threadName)s : %(levelname)s : %(message)s', level=logging.INFO)
    logging.info("running %s" % " ".join(sys.argv))
    logging.info("using optimization %s" % FAST_VERSION)

    # check and process cmdline input
    program = os.path.basename(sys.argv[0])
    if len(sys.argv) < 2:
        print(globals()['__doc__'] % locals())
        sys.exit(1)

    seterr(all='raise')  # don't ignore numpy errors

    if len(sys.argv) > 3:
        input_file = sys.argv[1]
        model_file = sys.argv[2]
        out_file = sys.argv[3]
        model = Sent2Vec(LineSentence(input_file), model_file=model_file, iteration=100)
        model.save_sent2vec_format(out_file)
    elif len(sys.argv) > 1:
        input_file = sys.argv[1]
        model = Word2Vec(LineSentence(input_file), size=100, window=5, min_count=5, workers=8)
        model.save(input_file + '.model')
        model.save_word2vec_format(input_file + '.vec')
    else:
        pass

    program = os.path.basename(sys.argv[0])
    logging.info("finished running %s" % program)

Relevant Link:

https://www.zhihu.com/question/21661274
https://fb56552f-a-62cb3a1a-s-sites.googlegroups.com/site/deeplearningworkshopnips2014/68.pdf?attachauth=ANoY7cq83cA2A-ZgTWKF9vIxGRQs96O5OGXbt8n_GqRuU_4IellDNS17z_56Wa6aafihhDHuNHM_7d_jitkT27Cy_RnspiY8Dms5w_eBXFrVBFoFqSdzPmUbHaAblYPGHNA3mCAYn4whKO5w9uk7w9BLyMIX-QNco591gprLzPTM_XHLYa5U2YtIBhVptFj4LMedeKki_hxk2UkHCN0_MwrLwAgZneBihpOAWSX8GgRb5-uqUWpq3CI%3D&attredirects=2
https://www.zhihu.com/question/27689129
https://github.com/hassyGo/paragraph-vector
https://arxiv.org/pdf/1405.4053.pdf 
https://github.com/jiyfeng/ParagraphVector/tree/master/ParaVector
https://github.com/JonathanRaiman/PVDM
https://github.com/thunlp/paragraph2vec
https://github.com/dennybritz/deeplearning-papernotes/blob/master/notes/distributed-representations-of-sentences-and-documents.md
https://github.com/klb3713/sentence2vec

 

6. Example 3: Visualizing images as compact (4096-dim) feature vectors derived from a trained convolutional neural network in t-SNE - high-dimensional visualization of CNN-abstracted images

We know that in NLP the word embedding vector is a by-product of training a shallow neural network, but we can treat this by-product as a mapping of each word into a high-dimensional space.

A similar situation exists for images. We build a deep convolutional network such as VGG (the final softmax classification layer is not needed here, because we want the activations that would otherwise be fed into it), feed images through the network, and take the output of the last fully-connected layer. These activations are, strictly speaking, just intermediate values of the network, but we can treat them as a vectorized representation of the image in a high-dimensional space - exactly the high-dimensional vector we need for each image.

from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D

def VGG_16():
    model = Sequential()
    model.add(ZeroPadding2D((1, 1), input_shape=(3, 224, 224)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu'))
    return model
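Since the VGG_16() model above deliberately stops at the second Dense(4096) layer (no softmax classifier), calling predict() on a preprocessed image directly yields the 4096-dim feature vector that is later fed into t-SNE. Below is a minimal sketch of that idea, assuming a separately obtained pre-trained weights file (the name vgg16_weights.h5 is hypothetical) and using a random array as a stand-in for a real preprocessed image:

import numpy as np

model = VGG_16()
model.load_weights('vgg16_weights.h5')  # hypothetical pre-trained VGG16 weights file
# stand-in for a real image preprocessed to shape (1, 3, 224, 224)
img = np.random.rand(1, 3, 224, 224).astype('float32')
feature_vector = model.predict(img)[0]  # shape (4096,): the image's high-dimensional representation
print(feature_vector.shape)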

The activation vector of this final layer can be taken directly as a high-dimensional vector and passed to t-SNE for visualization. Here we use pre-computed image vectors together with the corresponding Caltech-101 dataset.

# -*- coding: utf-8 -*-

import os
import random
import numpy as np
import json
import matplotlib.pyplot
import cPickle as pickle
from matplotlib.pyplot import imshow,show
from PIL import Image
from sklearn.manifold import TSNE
from tqdm import tqdm

if __name__ == '__main__':
    images, pca_features = pickle.load(open('../data/features_caltech101.p', 'r'))
    for i, f in zip(images, pca_features):
        print("image: %s, features: %0.2f,%0.2f,%0.2f,%0.2f... " % (i, f[0], f[1], f[2], f[3]))
    # Although in principle t-SNE works with any number of images, it's difficult to place that many tiles in a single image. So instead, we will take a random subset of num_images_to_plot images and plot those on the t-SNE grid. This step is optional.
    num_images_to_plot = 6000

    '''
    It is usually a good idea to first run the vectors through a faster dimensionality reduction technique like principal component analysis to project your data into an intermediate lower-dimensional space before using t-SNE.
    This improves accuracy, and cuts down on runtime since PCA is more efficient than t-SNE. Since we have already projected our data down with PCA in the previous notebook, we can proceed straight to running the t-SNE on the feature vectors.
    '''
    if len(images) > num_images_to_plot:
        sort_order = sorted(random.sample(xrange(len(images)), num_images_to_plot))
        images = [images[i] for i in sort_order]
        pca_features = [pca_features[i] for i in sort_order]

    # Internally, t-SNE uses an iterative approach, making small (or sometimes large) adjustments to the points. By default, t-SNE will go a maximum of 1000 iterations, but in practice, it often terminates early because it has found a locally optimal (good enough) embedding.
    X = np.array(pca_features)
    tsne = TSNE(n_components=2, learning_rate=150, perplexity=30, angle=0.2, verbose=2).fit_transform(X)

    # The variable tsne contains an array of unnormalized 2d points, corresponding to the embedding. In the next cell, we normalize the embedding so that it lies entirely in the range (0,1).
    tx, ty = tsne[:, 0], tsne[:, 1]
    tx = (tx - np.min(tx)) / (np.max(tx) - np.min(tx))
    ty = (ty - np.min(ty)) / (np.max(ty) - np.min(ty))

    # Finally, we will compose a new RGB image where the set of images have been drawn according to the t-SNE results.
    # Adjust width and height to set the size in pixels of the full image, and set max_dim to the pixel size (on the largest size) to scale images to.
    width = 4000
    height = 3000
    max_dim = 100

    full_image = Image.new('RGB', (width, height))
    for img, x, y in tqdm(zip(images, tx, ty)):
        tile = Image.open(img)
        rs = max(1, tile.width / max_dim, tile.height / max_dim)
        tile = tile.resize((int(tile.width / rs), int(tile.height / rs)), Image.ANTIALIAS)
        full_image.paste(tile, (int((width - max_dim) * x), int((height - max_dim) * y)))

    matplotlib.pyplot.figure(figsize=(16, 12))
    imshow(full_image)
    #show()

    # we can save the image to disk:
    full_image.save("../assets/example-tSNE-caltech101.jpg")

In the resulting image, the motorcycles, chairs, airplanes and elephants each form their own clusters, which shows that the VGG CNN has captured the high-dimensional detail of these objects; t-SNE simply makes this visible in an intuitive way.

0x2: handwritten digits MNIST t-SNE

# -*- coding: utf-8 -*-

import numpy as np
from skdata.mnist.views import OfficialImageClassification
from matplotlib import pyplot as plt
from tsne import bh_sne

# load up data
data = OfficialImageClassification(x_dtype="float32")
x_data = data.all_images
y_data = data.all_labels

# convert image data to float64 matrix. float64 is need for bh_sne
x_data = np.asarray(x_data).astype('float64')
x_data = x_data.reshape((x_data.shape[0], -1))

# For speed of computation, only run on a subset
n = 2000
x_data = x_data[:n]
y_data = y_data[:n]

# perform t-SNE embedding
vis_data = bh_sne(x_data)

# plot the result
vis_x = vis_data[:, 0]
vis_y = vis_data[:, 1]

plt.scatter(vis_x, vis_y, c=y_data, cmap=plt.cm.get_cmap("jet", 10))
plt.colorbar(ticks=range(10))
plt.clim(-0.5, 9.5)
plt.show()

Ten colours distinguish the ten MNIST handwritten digits [0-9]. t-SNE shows that the different digits already form separable spatial structure in the high-dimensional space, which to some extent explains why an algorithm like a CNN can classify MNIST digits so accurately, namely:

Only when the data itself contains separable structure can a suitable model complete the classification task accurately; how the input data is represented is sometimes just as important as, or even more important than, which model is chosen.

Relevant Link:

https://github.com/genekogan/image-tSNE
https://indico.io/blog/visualizing-with-t-sne/
https://github.com/oreillymedia/t-SNE-tutorial
https://drive.google.com/drive/folders/0B3WXSfqxKDkFYm9GMzlnemdEbEE
http://www.vision.caltech.edu/Image_Datasets/Caltech101/#Download
https://github.com/ml4a/ml4a-guides/blob/master/notebooks/image-tsne.ipynb
https://github.com/genekogan/ofxTSNE
http://ml4a.github.io/guides/ImageTSNEViewer/
http://ml4a.github.io/guides/ImageTSNELive/
https://github.com/ml4a/ml4a-ofx

 

7. Example 4: Clustering analysis of AV actress images from the web

import argparse
import sys
import numpy as np
import json
import os
from os.path import isfile, join
import keras
from keras.preprocessing import image
from keras.applications.imagenet_utils import decode_predictions, preprocess_input
from keras.models import Model
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from scipy.spatial import distance

from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

def process_arguments(args):
    parser = argparse.ArgumentParser(description='tSNE on audio')
    parser.add_argument('--images_path', action='store', help='path to directory of images')
    parser.add_argument('--output_path', action='store', help='path to where to put output json file')
    parser.add_argument('--num_dimensions', action='store', default=2, help='dimensionality of t-SNE points (default 2)')
    parser.add_argument('--perplexity', action='store', default=30, help='perplexity of t-SNE (default 30)')
    parser.add_argument('--learning_rate', action='store', default=150, help='learning rate of t-SNE (default 150)')
    params = vars(parser.parse_args(args))
    return params

def get_image(path, input_shape):
    img = image.load_img(path, target_size=input_shape)
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return x

def analyze_images(images_path):
    # make feature_extractor
    model = keras.applications.VGG16(weights='imagenet', include_top=True)
    feat_extractor = Model(input=model.input, output=model.get_layer("fc2").output)
    input_shape = model.input_shape[1:3]
    # get images
    candidate_images = [f for f in os.listdir(images_path) if os.path.splitext(f)[1].lower() in ['.jpg','.png','.jpeg']]
    # analyze images and grab activations
    activations = []
    images = []
    for idx,image_path in enumerate(candidate_images):
        file_path = join(images_path,image_path)
        img = get_image(file_path, input_shape)
        if img is not None:
            print("getting activations for %s %d/%d" % (image_path,idx,len(candidate_images)))
            acts = feat_extractor.predict(img)[0]
            activations.append(acts)
            images.append(image_path)
    # run PCA first
    print("Running PCA on %d images..." % len(activations))
    features = np.array(activations)
    pca = PCA(n_components=300)
    pca.fit(features)
    pca_features = pca.transform(features)
    return images, pca_features

def run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate):
    images, pca_features = analyze_images(images_path)
    print("Running t-SNE on %d images..." % len(images))
    X = np.array(pca_features)
    tsne = TSNE(n_components=tsne_dimensions, learning_rate=tsne_learning_rate, perplexity=tsne_perplexity, verbose=2).fit_transform(X)
    # save data to json
    data = []
    for i,f in enumerate(images):
        point = [ (tsne[i,k] - np.min(tsne[:,k]))/(np.max(tsne[:,k]) - np.min(tsne[:,k])) for k in range(tsne_dimensions) ]
        data.append({"path":os.path.abspath(join(images_path,images[i])), "point":point})
    with open(output_path, 'w') as outfile:
        json.dump(data, outfile)


if __name__ == '__main__':
    params = process_arguments(sys.argv[1:])
    images_path = params['images_path']
    output_path = params['output_path']
    tsne_dimensions = int(params['num_dimensions'])
    tsne_perplexity = int(params['perplexity'])
    tsne_learning_rate = int(params['learning_rate'])
    run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate)
    print("finished saving %s" % output_path)

Search Baidu for "***" and download 1000 images:

python tSNE-images.py --images_path ../data/av/ --output_path ../module/ImageTSNEViewer/av_points.json

Then lay them out on one large composite image with matplotlib:

# -*- coding: utf-8 -*-
import json
from matplotlib.pyplot import imshow
import matplotlib.pyplot
from PIL import Image

if __name__ == '__main__':
    # show on Display Board
    width = 4000
    height = 3000
    max_dim = 100

    full_image = Image.new('RGB', (width, height))

    # reading pre-trained image pointer
    with open('../module/ImageTSNEViewer/av_points.json', 'r') as f:
        data = json.load(f)
    for line in data:
        img = line['path']
        x, y = line['point'][0], line['point'][1]
        print img, x, y
        tile = Image.open(img)
        rs = max(1, tile.width / max_dim, tile.height / max_dim)
        tile = tile.resize((int(tile.width / rs), int(tile.height / rs)), Image.ANTIALIAS)
        full_image.paste(tile, (int((width - max_dim) * x), int((height - max_dim) * y)))

    matplotlib.pyplot.figure(figsize=(16, 12))
    imshow(full_image)

    # we can save the image to disk:
    full_image.save("../assets/example-tSNE-av.jpg")

Zooming in on the local details:

We can see that VGGNet has captured the fine-grained, high-dimensional detail information in the images.

 

8. Embedding-based vector representations for longer multi-word text

git clone https://github.com/torch/distro.git --recursive
bash install-deps;
./install.sh
source ~/.bashrc

pytorch
pip install http://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl 

0x1: A Structured Self-attentive Sentence Embedding

The attention mechanism was first proposed in the field of visual images, and its motivation comes from the human attention mechanism. When people look at an image, they do not take in every pixel position at once; instead they focus attention on the specific parts of the image that the current task requires. The most representative work is the paper "Recurrent Models of Visual Attention"; the figure below shows the core model of that paper.

The model adds an attention mechanism on top of a conventional RNN (the part circled in red in the paper's figure). Through attention it learns which part of the image to process: at every step, based on the location l learned from the previous state and the current input image, it processes only the attended pixels rather than all pixels of the image. The benefit is that fewer pixels need to be processed, which reduces the complexity of the task. The way attention is applied to images is clearly very similar to the human attention mechanism.

https://openreview.net/pdf?id=BJC_jUqxe

The overall pipeline is roughly as follows:

1. Running the sentence through an RNN:
    1) Encode the input sentence with word embeddings, giving S = (w1, w2, ..., wn)
    2) Feed the variable-length word-vector sequence into a bidirectional LSTM to obtain the hidden-state sequence H = (h1, h2, ..., hn), where each h is the concatenation of the forward and backward hidden states

2. Learning multiple attention values for each RNN state:
Here n is the (variable) length of the input, while the goal is a fixed-length embedding. To achieve this, a self-attention mechanism is introduced: it computes a linear combination of the n hidden vectors in H, taking the whole set of LSTM hidden states H as input and outputting a vector of weights (a 1-D tensor, i.e. a weight row vector).

3. Encouraging each attention vector to focus on different parts of the sentence by adding a penalty term:
A sentence usually contains more than one salient part, so a single attention vector m is not enough; we need multiple m's that focus on different parts of the sentence, i.e. multiple hops of attention. Say we want r different parts to be extracted from the sentence.

In other words, the method maps a piece of sentence text to an embedding matrix rather than to a single embedding vector (a minimal sketch of the attention step is given below).
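In the paper's notation the attention step is A = softmax(W_s2 · tanh(W_s1 · H^T)), the sentence embedding matrix is M = A · H, and the penalty term is ||A·A^T - I||_F^2. The following is a minimal sketch of just this step (it is not the authors' released code), assuming hypothetical sizes (2u = 600 for the BiLSTM output, d_a = 350, r = 30 hops) and a reasonably recent PyTorch version:

# Minimal sketch of structured self-attention (Lin et al. 2017), sizes are assumptions
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredSelfAttention(nn.Module):
    def __init__(self, lstm_dim=600, d_a=350, r=30):
        super(StructuredSelfAttention, self).__init__()
        self.W_s1 = nn.Linear(lstm_dim, d_a, bias=False)  # corresponds to W_s1 (d_a x 2u)
        self.W_s2 = nn.Linear(d_a, r, bias=False)          # corresponds to W_s2 (r x d_a)

    def forward(self, H):
        # H: (batch, n, 2u) -- the BiLSTM hidden states for a sentence of length n
        A = F.softmax(self.W_s2(torch.tanh(self.W_s1(H))), dim=1)  # attention over the n positions
        A = A.transpose(1, 2)                                      # (batch, r, n): r attention hops
        M = torch.bmm(A, H)                                        # (batch, r, 2u): sentence embedding matrix
        # penalty ||A A^T - I||_F^2 pushes the r hops to attend to different parts of the sentence
        I = torch.eye(A.size(1)).unsqueeze(0).expand(A.size(0), -1, -1)
        P = ((torch.bmm(A, A.transpose(1, 2)) - I) ** 2).sum()
        return M, A, P

# usage sketch: a batch of 4 sentences, 20 tokens each, with 600-dim BiLSTM states
H = torch.randn(4, 20, 600)
M, A, P = StructuredSelfAttention()(H)
print(M.shape, A.shape)  # (4, 30, 600), (4, 30, 20)

The r rows of M can then be flattened and fed into a downstream classifier, with the penalty P added to the training loss.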

Relevant Link:

https://openreview.net/pdf?id=BJC_jUqxe
http://pytorch.org/docs/master/tensors.html
https://github.com/torch/ezinstall
http://torch.ch/docs/getting-started.html
http://www.cnblogs.com/robert-dlut/p/5952032.html
https://github.com/yufengm/SelfAttentive
https://arxiv.org/pdf/1703.03130.pdf
https://nlp.stanford.edu/projects/glove/
https://github.com/Diego999/SelfSent
https://github.com/dennybritz/deeplearning-papernotes/blob/master/notes/self_attention_embedding.md

0x2: context2vec
Relevant Link:

https://github.com/orenmel/context2vec
http://u.cs.biu.ac.il/~nlp/resources/downloads/context2vec/
https://www.slideshare.net/BhaskarMitra3/neural-text-embeddings-for-information-retrieval-wsdm-2017
https://github.com/jhlau/doc2vec
https://u.cs.biu.ac.il/~melamuo/publications/context2vec_conll16.pdf

Copyright (c) 2017 LittleHann All rights reserved
