tf.split
tf.split(
value,
num_or_size_splits,
axis=0,
num=None,
name='split'
)
If num_or_size_splits is an integer, then value is split along dimension axis into that many smaller tensors. This requires that num_or_size_splits evenly divides value.shape[axis].
If num_or_size_splits is a 1-D Tensor (or list), we call it size_splits, and value is split into len(size_splits) elements. The shape of the i-th element is the same as that of value except along dimension axis, where the size is size_splits[i].
Returns:
If num_or_size_splits is a scalar, returns num_or_size_splits Tensor objects; if num_or_size_splits is a 1-D Tensor, returns num_or_size_splits.get_shape()[0] Tensor objects resulting from splitting value.
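Both modes can be sketched as follows (a minimal example, assuming TensorFlow 2.x with eager execution):

```python
import tensorflow as tf

x = tf.reshape(tf.range(30.0), [5, 6])

# Integer mode: 3 equal pieces along axis 1 (requires 6 % 3 == 0).
a, b, c = tf.split(x, num_or_size_splits=3, axis=1)
# a, b, c each have shape [5, 2].

# 1-D sizes mode: pieces of width 1, 2, 3 along axis 1 (sizes must sum to 6).
p, q, r = tf.split(x, num_or_size_splits=[1, 2, 3], axis=1)
# p: [5, 1], q: [5, 2], r: [5, 3].
```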
tf.squeeze
tf.squeeze(
input,
axis=None,
name=None,
squeeze_dims=None
)
Returns:
A Tensor. Has the same type as input. Contains the same data as input, but has one or more dimensions of size 1 removed.
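A quick sketch of both forms (assuming TensorFlow 2.x with eager execution):

```python
import tensorflow as tf

x = tf.zeros([1, 3, 1, 2])

# With no axis given, every size-1 dimension is removed.
y = tf.squeeze(x)          # shape [3, 2]

# With axis, only the listed size-1 dimension(s) are removed.
z = tf.squeeze(x, axis=0)  # shape [3, 1, 2]
```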
tf.nn.depth_to_space VS. torch.nn.PixelShuffle
Both apply the same upsampling operation (sub-pixel convolution), which stems from this paper: https://arxiv.org/abs/1609.05158
tf.nn.depth_to_space(
input,
block_size,
name=None,
data_format='NHWC'
)
For example, given an input of shape [1, 1, 1, 4], data_format = "NHWC" and block_size = 2:
x = [[[[1, 2, 3, 4]]]]
This operation will output a tensor of shape [1, 2, 2, 1]:
[[[[1], [2]],
  [[3], [4]]]]
Here, the input has a batch of 1 and each batch element has shape [1, 1, 4]; the corresponding output will have 2x2 elements and a depth of 1 channel (1 = 4 / (block_size * block_size)). The output element shape is [2, 2, 1].
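The example above can be run directly (assuming TensorFlow 2.x; note that for an output depth of 1, as here, the channel ordering also matches what torch.nn.PixelShuffle would produce on the NCHW-transposed tensor):

```python
import tensorflow as tf

x = tf.constant([[[[1, 2, 3, 4]]]])        # shape [1, 1, 1, 4], NHWC
y = tf.nn.depth_to_space(x, block_size=2)  # shape [1, 2, 2, 1]
# The 4 channels are rearranged into a 2x2 spatial block:
# [[[[1], [2]],
#   [[3], [4]]]]
```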