>>> x = torch.empty(5, 7, 3)
>>> y = torch.empty(5, 7, 3)
# same shapes are always broadcastable (i.e. the above rules always hold)

>>> x = torch.empty((0,))
>>> y = torch.empty(2, 2)
# x and y are not broadcastable, because x does not have at least 1 dimension

# can line up trailing dimensions
>>> x = torch.empty(5, 3, 4, 1)
>>> y = torch.empty(   3, 1, 1)
# x and y are broadcastable.
# 1st trailing dimension: both have size 1
# 2nd trailing dimension: y has size 1
# 3rd trailing dimension: x size == y size
# 4th trailing dimension: y dimension doesn't exist

# but:
>>> x = torch.empty(5, 2, 4, 1)
>>> y = torch.empty(   3, 1, 1)
# x and y are not broadcastable, because in the 3rd trailing dimension 2 != 3
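The broadcastability rule illustrated above can be sketched in plain Python. The helper name `is_broadcastable` is hypothetical, not a torch API; it just walks the trailing dimensions the way the comments above do:

```python
def is_broadcastable(shape_x, shape_y):
    """Check the broadcastability rule on two shape tuples.

    Two shapes are broadcastable if each tensor has at least one
    dimension and, walking from the trailing dimension backward,
    each pair of sizes is equal, one of them is 1, or one of them
    doesn't exist.
    """
    if len(shape_x) == 0 or len(shape_y) == 0:
        return False  # each tensor needs at least 1 dimension
    # zip stops at the shorter shape, so missing leading dims are allowed
    for sx, sy in zip(reversed(shape_x), reversed(shape_y)):
        if sx != sy and sx != 1 and sy != 1:
            return False
    return True

print(is_broadcastable((5, 3, 4, 1), (3, 1, 1)))  # True
print(is_broadcastable((5, 2, 4, 1), (3, 1, 1)))  # False: 2 != 3
```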
2. Once two tensors are "broadcastable", the resulting tensor size is computed as follows:
If the two tensors do not have the same number of dimensions, prepend 1s to the shape of the tensor with fewer dimensions until both shapes have equal length.
Then, for each dimension, the resulting dimension size is the maximum of the two tensors' sizes along that dimension.
# can line up trailing dimensions to make reading easier
>>> x = torch.empty(5, 1, 4, 1)
>>> y = torch.empty(   3, 1, 1)
>>> (x + y).size()
torch.Size([5, 3, 4, 1])

# but not necessary:
>>> x = torch.empty(1)
>>> y = torch.empty(3, 1, 7)
>>> (x + y).size()
torch.Size([3, 1, 7])

>>> x = torch.empty(5, 2, 4, 1)
>>> y = torch.empty(3, 1, 1)
>>> (x + y).size()
RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1
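The two-step size rule can also be sketched in plain Python. The helper name `broadcast_shape` is hypothetical (PyTorch itself exposes similar functionality as `torch.broadcast_shapes`); this sketch prepends 1s and then takes the per-dimension maximum, exactly as the rules state:

```python
def broadcast_shape(shape_x, shape_y):
    """Compute the broadcast result shape per the two rules above.

    Step 1: prepend 1s to the shorter shape so both have equal length.
    Step 2: each result dimension is the max of the two sizes.
    Raises ValueError if the shapes are not broadcastable.
    """
    ndim = max(len(shape_x), len(shape_y))
    x = (1,) * (ndim - len(shape_x)) + tuple(shape_x)
    y = (1,) * (ndim - len(shape_y)) + tuple(shape_y)
    result = []
    for sx, sy in zip(x, y):
        if sx != sy and sx != 1 and sy != 1:
            raise ValueError(
                f"shapes {shape_x} and {shape_y} are not broadcastable")
        result.append(max(sx, sy))
    return tuple(result)

print(broadcast_shape((5, 1, 4, 1), (3, 1, 1)))  # (5, 3, 4, 1)
print(broadcast_shape((1,), (3, 1, 7)))          # (3, 1, 7)
```

Feeding it the failing pair from the last example, `(5, 2, 4, 1)` vs `(3, 1, 1)`, raises the same kind of mismatch error that torch reports at dimension 1.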
Note
In-place operations do not allow the in-place tensor to change shape as a result of broadcasting.
>>> x = torch.empty(5, 3, 4, 1)
>>> y = torch.empty(3, 1, 1)
>>> (x.add_(y)).size()
torch.Size([5, 3, 4, 1])

# but:
>>> x = torch.empty(1, 3, 1)
>>> y = torch.empty(3, 1, 7)
>>> (x.add_(y)).size()
RuntimeError: The expanded size of the tensor (1) must match the existing size (7) at non-singleton dimension 2.
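The in-place constraint amounts to: the broadcast result must have exactly the shape of the tensor being modified, since an in-place op cannot reallocate its storage. A plain-Python sketch of that check (the helper name `inplace_broadcast_ok` is hypothetical, not a torch API):

```python
def inplace_broadcast_ok(dst_shape, src_shape):
    """True if an op like dst.add_(src) is allowed: broadcasting src
    against dst must leave dst's shape unchanged."""
    if len(src_shape) > len(dst_shape):
        return False  # result would gain dimensions dst doesn't have
    ndim = len(dst_shape)
    s = (1,) * (ndim - len(src_shape)) + tuple(src_shape)
    for sd, ss in zip(dst_shape, s):
        if sd != ss and ss != 1:
            # either not broadcastable at all, or dst would need to grow
            return False
    return True

print(inplace_broadcast_ok((5, 3, 4, 1), (3, 1, 1)))  # True
print(inplace_broadcast_ok((1, 3, 1), (3, 1, 7)))     # False: dim 2 would grow 1 -> 7
```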