PyTorch: Broadcasting and Element-wise Operations

Element-wise tensor operations for deep learning

An element-wise operation is an operation between two tensors that operates on corresponding elements within the respective tensors.

Two elements are said to be corresponding if they occupy the same position within their respective tensors. The position is determined by the indexes used to locate each element.

> import torch

> t1 = torch.tensor([
    [1,2],
    [3,4]
], dtype=torch.float32)

> t2 = torch.tensor([
    [9,8],
    [7,6]
], dtype=torch.float32)

# Example of the first axis
> print(t1[0])
tensor([1., 2.])


# Example of the second axis
> print(t1[0][0])
tensor(1.)

> t1[0][0]
tensor(1.)


> t2[0][0]
tensor(9.)

This allows us to see that the corresponding element for the 1 in t1 is the 9 in t2.
The correspondence is defined by the indexes.
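
To make this concrete, here is a minimal sketch that reproduces element-wise addition with an explicit loop over the indexes, assuming the t1 and t2 defined above:

> result = torch.zeros_like(t1)
> for i in range(t1.shape[0]):
      for j in range(t1.shape[1]):
          result[i][j] = t1[i][j] + t2[i][j]  # pair elements by index
> result
tensor([[10., 10.],
        [10., 10.]])

This loop computes exactly what the vectorized t1 + t2 below computes.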

Addition is an element-wise operation

> t1 + t2
tensor([[10., 10.],
        [10., 10.]])

Arithmetic operations are element-wise operations

> print(t1 + 2)
tensor([[3., 4.],
        [5., 6.]])

> print(t1 - 2)
tensor([[-1.,  0.],
        [ 1.,  2.]])

> print(t1 * 2)
tensor([[2., 4.],
        [6., 8.]])

> print(t1 / 2)
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])

> print(t1.add(2))
tensor([[3., 4.],
        [5., 6.]])

> print(t1.sub(2))
tensor([[-1.,  0.],
        [ 1.,  2.]])

> print(t1.mul(2))
tensor([[2., 4.],
        [6., 8.]])

> print(t1.div(2))
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])
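
As a side note, each of these methods also has an in-place variant whose name ends with an underscore (add_, sub_, mul_, div_). A brief sketch; unlike the versions above, the in-place variant mutates the tensor rather than returning a new one:

> t3 = torch.tensor([[1,2],[3,4]], dtype=torch.float32)
> t3.add_(2)  # modifies t3 in place
> t3
tensor([[3., 4.],
        [5., 6.]])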

Broadcasting tensors

Broadcasting describes how tensors with different shapes are treated during element-wise operations.
**Broadcasting is the concept whose implementation allows us to add scalars to higher dimensional tensors.**
We can see what the broadcasted scalar value looks like using NumPy's broadcast_to() function:

> import numpy as np

> np.broadcast_to(2, t1.shape)
array([[2, 2],
       [2, 2]])

This:

> t1 + 2
tensor([[3., 4.],
        [5., 6.]])

is really this:

> t1 + torch.tensor(
    np.broadcast_to(2, t1.shape)
    ,dtype=torch.float32
)
tensor([[3., 4.],
        [5., 6.]])
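
As a side note, recent PyTorch releases (1.8 and later) expose this utility directly as torch.broadcast_to(), so the round trip through NumPy is optional; a quick equivalent, assuming such a version is installed:

> t1 + torch.broadcast_to(torch.tensor(2.0), t1.shape)
tensor([[3., 4.],
        [5., 6.]])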

Here is another example:

> t1 = torch.tensor([
    [1,1],
    [1,1]
], dtype=torch.float32)

> t2 = torch.tensor([2,4], dtype=torch.float32)

> t1.shape
torch.Size([2, 2])

> t2.shape
torch.Size([2])

The lower rank tensor t2 will be transformed via broadcasting to match the shape of the higher rank tensor t1, and the element-wise operation will be performed as usual.

> np.broadcast_to(t2.numpy(), t1.shape)
array([[2., 4.],
       [2., 4.]], dtype=float32)

> t1 + t2
tensor([[3., 5.],
        [3., 5.]])
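
The general rule, which PyTorch shares with NumPy, is that shapes are compared from the trailing dimension backwards, and two dimensions are compatible when they are equal or when one of them is 1. If any pair of dimensions fails this test, the operation raises an error. A quick sketch of an incompatible case (the exact error wording varies by version):

> t1 + torch.tensor([2,4,6], dtype=torch.float32)
RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1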

Comparison Operations are Element-wise

For a given comparison operation between two tensors, a new tensor of the same shape is returned, with each element containing a torch.bool value of True or False.

> torch.tensor([1, 2, 3]) < torch.tensor([3, 1, 2])
tensor([True, False, False])

Element-wise Comparison Operation Examples

> t = torch.tensor([
    [0,5,0],
    [6,0,7],
    [0,8,0]
], dtype=torch.float32)

> t.eq(0)  # equal to zero?
tensor([[True, False, True],
        [False, True, False],
        [True, False, True]])


> t.ge(0)  # greater than or equal to
tensor([[True, True, True],
        [True, True, True],
        [True, True, True]])


> t.gt(0)  # greater than
tensor([[False, True, False],
        [True, False, True],
        [False, True, False]])


> t.lt(0)  # less than
tensor([[False, False, False],
        [False, False, False],
        [False, False, False]])

> t.le(7)  # less than or equal to
tensor([[True, True, True],
        [True, True, True],
        [True, False, True]])

Thinking about these operations from a broadcasting perspective, we can see that the last one, t.le(7), is really this:

> t <= torch.tensor(
    np.broadcast_to(7, t.shape)
    ,dtype=torch.float32
)

tensor([[True, True, True],
        [True, True, True],
        [True, False, True]])

# And equivalently this
> t <= torch.tensor([
    [7,7,7],
    [7,7,7],
    [7,7,7]
], dtype=torch.float32)

tensor([[True, True, True],
        [True, True, True],
        [True, False, True]])
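
A common follow-up use for these boolean result tensors, sketched briefly here, is masking: indexing a tensor with an element-wise comparison selects the elements where the mask is True.

> t[t.gt(0)]
tensor([5., 6., 7., 8.])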

Element-wise Operations using Functions

> t.abs() 
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])


> t.sqrt()
tensor([[0.0000, 2.2361, 0.0000],
        [2.4495, 0.0000, 2.6458],
        [0.0000, 2.8284, 0.0000]])

> t.neg()
tensor([[-0., -5., -0.],
        [-6., -0., -7.],
        [-0., -8., -0.]])

> t.neg().abs()
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])
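
Each of these methods also has an equivalent function in the torch namespace, so the same operations can be written in a functional style; a quick sketch using the t defined above:

> torch.abs(t)
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])

> torch.neg(t)
tensor([[-0., -5., -0.],
        [-6., -0., -7.],
        [-0., -8., -0.]])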

Element-wise, component-wise, and point-wise all mean the same thing.
