[TensorFlow Notes] tf.split() and tf.squeeze()

split(
    value,
    num_or_size_splits,
    axis=0,
    num=None,
    name='split'
)

Arguments:
value: the tensor to split.
num_or_size_splits: if an integer n, value is split into n equal sub-tensors along axis. If a 1-D tensor (or Python list) T, value is split into len(T) sub-tensors, where the i-th piece has size T[i] along axis.
axis: the dimension to split along; defaults to 0. When num_or_size_splits is an integer, value.shape[axis] must be evenly divisible by it; when it is a size list, the sizes must sum to value.shape[axis].

Example:

# 'value' is a tensor with shape [5, 30].
# num_or_size_splits is a size list of length 3, so value is split into 3 sub-tensors;
# axis is 1, value.shape[1] is 30, and 4 + 15 + 11 == 30.
split0, split1, split2 = tf.split(value, [4, 15, 11], 1)

tf.shape(split0)  # [5, 4]
tf.shape(split1)  # [5, 15]
tf.shape(split2)  # [5, 11]

# num_or_size_splits is the integer 3, so value is split into 3 equal tensors;
# value.shape[1] is 30, which is divisible by 3.
split0, split1, split2 = tf.split(value, num_or_size_splits=3, axis=1)
tf.shape(split0)  # [5, 10]
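
The snippet above assumes `value` is an existing tensor of shape [5, 30]. A self-contained sketch you can paste into a session (assuming TensorFlow 1.x; the zeros tensor is just placeholder data):

import tensorflow as tf

value = tf.zeros([5, 30])  # any tensor with shape [5, 30] works here

# Split by explicit sizes along axis 1: 4 + 15 + 11 == 30.
split0, split1, split2 = tf.split(value, [4, 15, 11], 1)
print(split0.shape, split1.shape, split2.shape)  # (5, 4) (5, 15) (5, 11)

# Split into 3 equal pieces along axis 1: 30 / 3 == 10 each.
parts = tf.split(value, num_or_size_splits=3, axis=1)
print([p.shape.as_list() for p in parts])  # [[5, 10], [5, 10], [5, 10]]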

Another concrete example, run in a session:

>>> a=np.reshape(range(24),(4,2,3))
>>> a
array([[[ 0,  1,  2],
        [ 3,  4,  5]],

       [[ 6,  7,  8],
        [ 9, 10, 11]],

       [[12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23]]])

>>> sess = tf.InteractiveSession()
# Split a into two tensors along axis 1. a.shape[1] is 2, which is divisible
# by 2, so this does not raise; the output is 2 tensors of shape [4, 1, 3].
>>> b = tf.split(a, 2, 1)
>>> b
[<tf.Tensor 'split:0' shape=(4, 1, 3) dtype=int32>, <tf.Tensor 'split:1' shape=(4, 1, 3) dtype=int32>]
>>> sess.run(b)
[array([[[ 0,  1,  2]],

       [[ 6,  7,  8]],

       [[12, 13, 14]],

       [[18, 19, 20]]]), array([[[ 3,  4,  5]],

       [[ 9, 10, 11]],

       [[15, 16, 17]],

       [[21, 22, 23]]])]
>>> c = tf.split(a, 2, 0)
# a.shape[0] is 4, divisible by 2, so the output is 2 tensors of shape [2, 2, 3].
>>> c
[<tf.Tensor 'split_1:0' shape=(2, 2, 3) dtype=int32>, <tf.Tensor 'split_1:1' shape=(2, 2, 3) dtype=int32>]
>>> sess.run(c)
[array([[[ 0,  1,  2],
        [ 3,  4,  5]],

       [[ 6,  7,  8],
        [ 9, 10, 11]]]), array([[[12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23]]])]
>>> d = tf.split(a, 2, 2)
# a.shape[2] is 3, which is NOT divisible by 2, so this raises an error:
Traceback (most recent call last):
  File "D:\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 671, in _call_cpp_shape_fn_impl
    input_tensors_as_shapes, status)
  File "D:\Anaconda2\envs\tensorflow\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "D:\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension size must be evenly divisible by 2 but is 3
        Number of ways to split should evenly divide the split dimension for 'split_1' (op: 'Split') with input shapes: [], [4,2,3] and with computed input tensors: input[0] = <2>.
>>> d = tf.split(a, 3, 2)
# With 3 splits, a.shape[2] == 3 divides evenly; returns 3 tensors of shape [4, 2, 1].
>>> d
[<tf.Tensor 'split_2:0' shape=(4, 2, 1) dtype=int32>, <tf.Tensor 'split_2:1' shape=(4, 2, 1) dtype=int32>, <tf.Tensor 'split_2:2' shape=(4, 2, 1) dtype=int32>]
>>> sess.run(d)
[array([[[ 0],
        [ 3]],

       [[ 6],
        [ 9]],

       [[12],
        [15]],

       [[18],
        [21]]]), array([[[ 1],
        [ 4]],

       [[ 7],
        [10]],

       [[13],
        [16]],

       [[19],
        [22]]]), array([[[ 2],
        [ 5]],

       [[ 8],
        [11]],

       [[14],
        [17]],

       [[20],
        [23]]])]
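
If the dimension does not divide evenly, you can keep the same axis and pass an explicit size list instead, as long as the sizes sum to a.shape[axis]. A minimal sketch continuing the same session:

>>> e = tf.split(a, [2, 1], 2)
# axis 2 has size 3 and 2 + 1 == 3, so this succeeds where tf.split(a, 2, 2) failed.
>>> [t.shape.as_list() for t in e]
[[4, 2, 2], [4, 2, 1]]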

Note:
Unlike reshape, tf.split never changes the relative order of the values: each output piece simply takes a consecutive block along the split axis, and every other dimension is left untouched. It can only make a dimension smaller, never larger. The comparison below illustrates the difference:
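
A minimal sketch in the same session, comparing the two on a 2×3 array (output condensed for readability):

>>> m = np.reshape(range(6), (2, 3))  # [[0, 1, 2], [3, 4, 5]]
>>> sess.run(tf.split(m, 3, 1))       # each column keeps its original pairing
[array([[0], [3]]), array([[1], [4]]), array([[2], [5]])]
>>> sess.run(tf.reshape(m, (3, 2)))   # reshape regroups neighbouring values
array([[0, 1],
       [2, 3],
       [4, 5]])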

tf.squeeze
squeeze(
    input,
    axis=None,
    name=None,
    squeeze_dims=None
)

Removes dimensions of size 1 from the shape of a tensor.
For example:

# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
tf.shape(tf.squeeze(t))  # [2, 3]

You can also specify which dimensions to remove:

# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
tf.shape(tf.squeeze(t, [2, 4]))  # [1, 2, 3, 1]
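
Both forms, plus the common idiom of chaining tf.split and tf.squeeze to turn a batch of sequences into a list of per-step inputs, are sketched below (assuming TensorFlow 1.x; the shapes are illustrative):

import tensorflow as tf

t = tf.zeros([1, 2, 1, 3, 1, 1])
print(tf.squeeze(t).shape)          # (2, 3): every size-1 dimension removed
print(tf.squeeze(t, [2, 4]).shape)  # (1, 2, 3, 1): only dimensions 2 and 4 removed

# Typical RNN input preparation: split [batch, num_steps, dim] along the
# time axis, then squeeze away the leftover size-1 step dimension.
batch = tf.zeros([8, 5, 16])                 # batch=8, num_steps=5, dim=16
steps = tf.split(batch, 5, axis=1)           # 5 tensors of shape [8, 1, 16]
steps = [tf.squeeze(s, [1]) for s in steps]  # 5 tensors of shape [8, 16]
print(steps[0].shape)                        # (8, 16)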