tf axis = 1

I always get confused about which result a reduction along a given axis produces; here is an example:

import tensorflow as tf
from scipy.spatial.distance import pdist, squareform


class EmbeddingTable(object):
    def __init__(self, ini):
        self.embedding_table = tf.get_variable(name="embedding",
                                               initializer=ini, trainable=True)

    def get_shape(self):
        return self.embedding_table.get_shape()

    def embed_words(self, words):
        """

        :param words:  padding 之后的id
        :return:
        """
        emb = tf.nn.embedding_lookup(self.embedding_table, words)
        return emb


ini = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]
embed_table = EmbeddingTable(ini)

word = tf.placeholder(dtype=tf.int32, shape=(None, None))
mm = tf.placeholder(dtype=tf.int32, shape=(None, None))
x_op = embed_table.embed_words(word)

xx = [[1, 2],
      [0, 1]]

mask = [[1, 0],
        [0, 1]]

xx_result = tf.reduce_sum(xx, axis=1)
multi_op = x_op * mm[:, :, None]     # add a trailing axis so the mask broadcasts over the embedding dim
n = tf.reduce_sum(multi_op, axis=0)  # note: axis=0 sums over the batch; to pool over the sequence use axis=1

ini_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(ini_op)
    # m = sess.run(x_op, feed_dict={word: xx})
    # x_, bb, x_m = sess.run([x_op, multi_op, n], feed_dict={mm: mask, word: xx})

    # xx is a plain Python list, so no feed_dict is needed here.
    # axis=1 collapses the columns: each row reduces to a scalar.
    x_ = sess.run(xx_result)  # [3 1]

    #########
    # axis=0 collapses the rows: each column reduces to a scalar.
    xx_result = tf.reduce_sum(xx, axis=0)
    x_ = sess.run(xx_result)  # [1 3]
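tf.reduce_sum follows the same axis convention as NumPy's sum, and the mask trick mm[:, :, None] is plain NumPy-style broadcasting, so the whole example above can be checked without building a graph or a session. A minimal NumPy sketch (same data as above):

```python
import numpy as np

xx = np.array([[1, 2],
               [0, 1]])

# The reduced axis disappears from the result shape.
print(xx.sum(axis=1))  # [3 1]  -- axis=1 removed: each row becomes a scalar
print(xx.sum(axis=0))  # [1 3]  -- axis=0 removed: each column becomes a scalar

# Masked embedding sum: fancy indexing table[xx] plays the role of
# tf.nn.embedding_lookup and gives shape (batch, seq_len, dim).
table = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])
emb = table[xx]                  # shape (2, 2, 3)
mask = np.array([[1, 0],
                 [0, 1]])
masked = emb * mask[:, :, None]  # trailing axis broadcasts the mask over dim
summed = masked.sum(axis=1)      # pool over seq_len -> shape (2, 3)
print(summed)                    # [[0 1 1] [0 1 1]]
```

Here summing over axis=1 (the sequence axis) is what a masked sentence embedding wants; summing over axis=0 would mix examples across the batch.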
Little details:
  1. There is no tensor.T shorthand for transposing a matrix;
    use tf.transpose(tensor).
  2. Arithmetic rules between Tensors:
    Any arithmetic between Tensors of the same shape is applied element-wise.
    Arithmetic between Tensors of different shapes is called broadcasting; aligning the shapes from the right, each dimension must either match or be 1.
    Arithmetic between a Tensor and a scalar (0-d tensor) broadcasts the scalar value to every element.
    Note: TensorFlow requires all Tensors in a math operation to have the same dtype.
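TensorFlow follows NumPy's broadcasting rules, so the three cases above can be demonstrated with NumPy alone (the arrays here are made up for illustration):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])        # shape (2, 3)

# Same shape: arithmetic is element-wise.
print(a + a)                     # [[ 2  4  6] [ 8 10 12]]

# Different shapes: (1, 3) broadcasts against (2, 3), because aligned
# from the right each dimension either matches or is 1.
row = np.array([[10, 20, 30]])   # shape (1, 3)
print(a + row)                   # [[11 22 33] [14 25 36]]

# Scalar: the scalar is broadcast to every element.
print(a * 2)                     # [[ 2  4  6] [ 8 10 12]]

# NumPy has a.T, but in TensorFlow you must write tf.transpose(tensor).
print(a.T.shape)                 # (3, 2)
```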

For the arithmetic rules between tensors, see:
https://blog.csdn.net/zywvvd/article/details/78593618

sentence embedding:
A Simple Language Model based Evaluator for Sentence Compression.
Exploring Semantic Properties of Sentence Embeddings
Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation. Tiancheng Zhao, Kyusong Lee and Maxine Eskenazi.
Sentence-State LSTM for Text Representation. Yue Zhang, Qi Liu and Linfeng Song.
Subword-level Word Vector Representations for Korean. Sungjoon Park, Jeongmin Byun, Sion Baek, Yongseok Cho and Alice Oh.
hyperdoc2vec: Distributed Representations of Hypertext Documents. Jialong Han, Yan Song, Wayne Xin Zhao, Shuming Shi and Haisong Zhang.
