TensorFlow (5): The broadcast_to Operation on Tensors
1 Introduction to broadcast_to
1.1 Key idea
- insert a dimension of size 1 at the front if needed
- expand every dimension of size 1 to the matching size
- in other words: first add dimensions, then expand them to the target shape
- broadcasting performs the corresponding computation without actually copying the data (see the sketch below)
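A minimal sketch of these two steps; the shapes [4, 16, 16, 32] and [32] are illustrative values, not taken from the text above:

import tensorflow as tf

x = tf.random.uniform([4, 16, 16, 32])
bias = tf.random.uniform([32])

# step 1: insert leading dims of size 1 -> [1, 1, 1, 32]
b1 = tf.reshape(bias, [1, 1, 1, 32])
# step 2: expand each size-1 dim to the target size -> [4, 16, 16, 32]
b2 = tf.broadcast_to(b1, x.shape)

# implicit broadcasting does both steps at once
print(tf.reduce_all(x + bias == x + b2).numpy())  # expected: True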
1.2 How to understand it
- when a tensor is missing an axis, broadcasting conceptually creates that new axis
- when a dimension has size 1, broadcasting conceptually repeats it to the required size
1.3 Why broadcasting
- it matches real computation needs, e.g. adding one bias vector to every sample in a batch (see the example below)
- it saves memory, since no expanded copy has to be materialized
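For instance, a per-channel bias can be added to a whole batch directly; the shapes below are only for illustration:

import tensorflow as tf

x = tf.random.uniform([4, 28, 28, 3])   # [batch, height, width, channel]
bias = tf.constant([0.1, 0.2, 0.3])     # [channel]

# implicit broadcasting: no tiled copy of `bias` is created by hand
y = x + bias
print(y.shape)  # (4, 28, 28, 3)

# the manual alternative materializes a full-size copy of the bias first
bias_full = tf.tile(tf.reshape(bias, [1, 1, 1, 3]), [4, 28, 28, 1])
print(bias_full.shape)  # (4, 28, 28, 3)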
1.4 When is broadcasting possible?
- match from the last (rightmost) dimension: aligned sizes must be equal, or one of them must be 1
- if the current dimension is 1, expand it to the matching size
- if one tensor is missing a dimension, insert a dimension of size 1 and expand it
- otherwise, the tensors are not broadcastable

For example:
[4, 32, 14, 14]
[2, 32, 14, 14]
not broadcastable (4 vs 2, and neither is 1)
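The matching rule can be written down as a small helper; `broadcastable` here is a hypothetical function used only to restate the rule, not a TensorFlow API:

def broadcastable(shape_a, shape_b):
    # match from the rightmost dimension; a missing dimension counts as compatible
    for da, db in zip(reversed(shape_a), reversed(shape_b)):
        if da != db and da != 1 and db != 1:
            return False
    return True

print(broadcastable([4, 32, 14, 14], [32, 14, 14]))     # True
print(broadcastable([4, 32, 14, 14], [2, 32, 14, 14]))  # False (4 vs 2)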
2 Implicit broadcasting
"""
import tensorflow as tf

a = tf.random.uniform([4, 28, 28, 3])

# scalar: broadcast to every element
b = tf.constant(5.)
print("a+b:", (a + b).shape)

# shape [1]: expanded along every axis
c = tf.ones([1])
print("a+c:", (a + c).shape)

# shape [28,28,3]: a leading batch dim of size 1 is inserted, then expanded
d = tf.ones([28, 28, 3])
print("a+d:", (a + d).shape)

# shape [2,28,28,3]: 2 does not match 4 and is not 1, so this fails
e = tf.ones([2, 28, 28, 3])
try:
    print("a+e:", (a + e).shape)
except Exception as error:
    print("a+e:", error)
3 Explicit broadcasting with tf.broadcast_to
"""
a = tf.random.uniform([4, 28, 28, 3])
b = tf.random.normal([4, 1, 1, 1])
# implicit: every size-1 axis of b is expanded to match a
print("a+b:", (a + b).shape)
# explicit: tf.broadcast_to expands b to a.shape directly
c = tf.broadcast_to(b, a.shape)
print("c:", c.shape)
4 tf.broadcast_to vs tf.tile
"""
a = tf.ones([3, 4])
# broadcast_to: state the target shape directly
b = tf.broadcast_to(a, [2, 3, 4])
print("b:", b.shape)
# tile: first add a leading axis, then repeat along it
c_temp = tf.expand_dims(a, axis=0)
c = tf.tile(c_temp, [2, 1, 1])  # each number is how many times that axis is repeated
print("c:", c.shape)
These are study notes written while following Long Long (龙龙)'s course "深度学习与TensorFlow 2入门实战" (Deep Learning and TensorFlow 2: A Hands-on Introduction).
by CyrusMay 2022 04 06