(3) Tensors

It starts with a tensor

NO.4 Size, storage offset, and strides
  • To index into a storage, tensors rely on a few pieces of information that, together with their storage, unequivocally define them: size, storage offset, and stride.
    • The storage offset is the index in the storage that corresponds to the first element of the tensor (its starting position in memory).
    • The stride is the number of elements in the storage that need to be skipped over to obtain the next element along each dimension.
    • The size is a tuple indicating how many elements the tensor spans along each dimension (its shape).
  • Accessing element (i, j) of a 2D tensor amounts to accessing element storage_offset + stride[0] * i + stride[1] * j of the storage. The offset will usually be zero; if the tensor is a view into a storage that was created to hold a larger tensor, the offset may be positive. (A sketch verifying this formula follows the list.)
  • This indirection between Tensor and Storage makes some operations, such as transposing a tensor or extracting a subtensor, inexpensive: they do not require reallocating memory, only allocating a new tensor object with a different size, storage offset, or stride.
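A minimal sketch (the tensor t and the indices are ours, not from the original) checking the indexing formula against direct element access:

import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6]])
i, j = 1, 2
# flat position of element (i, j) in the underlying storage
flat = t.storage_offset() + t.stride()[0] * i + t.stride()[1] * j
t[i, j].item()
>>> 6
t.storage()[flat]
>>> 6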
NO.5 Transpose
f = torch.tensor([[1,2,3],[4,5,6]])
>>> tensor([[1, 2, 3],
  	       [4, 5, 6]])
f.storage()
>>>  1
	 2
	 3
	 4
	 5
	 6
	[torch.LongStorage of size 6]
g = f.t()  # transpose
>>> tensor([[1, 4],
	        [2, 5],
	        [3, 6]])
g.storage()
>>>  1
	 2
	 3
	 4
	 5
	 6
	[torch.LongStorage of size 6]
f.storage().data_ptr() == g.storage().data_ptr()
>>> True
  • f and g above share the same underlying memory.
    The underlying one-dimensional array is never modified; instead, a new set of tensor metadata is created, and a new stride is specified in that metadata. (A sketch of this sharing follows.)
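A quick check (a sketch; the written value 40 is arbitrary) that the two tensors really share one storage, so a write through the transposed view is visible through the original:

g[0, 1] = 40
f
>>> tensor([[ 1,  2,  3],
            [40,  5,  6]])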

  • How size, offset, and stride change after a transpose (see the sketch below).
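Continuing with f and g from above (a sketch): f.t() swaps the size and stride entries and leaves the offset at zero, which is all a transpose needs to do.

f.size(), f.storage_offset(), f.stride()
>>> (torch.Size([2, 3]), 0, (3, 1))
g.size(), g.storage_offset(), g.stride()
>>> (torch.Size([3, 2]), 0, (1, 3))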

  • Contiguous

    • Contiguous tensors are convenient because you can visit them efficiently and in order without jumping around in the storage.
f.is_contiguous()
>>> True
g.is_contiguous()
>>> False
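The test reduces to the strides: a tensor is contiguous exactly when its strides match a row-major layout of its shape (last stride 1, each stride equal to the next stride times the next size). A sketch using a hypothetical helper row_major:

def row_major(t):
    # expected strides for a row-major (C-order) layout of t's shape
    expected, acc = [], 1
    for s in reversed(t.size()):
        expected.insert(0, acc)
        acc *= s
    return tuple(expected) == t.stride()

row_major(f), row_major(g)
>>> (True, False)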

In PyTorch, some tensor operations do not actually change the tensor's contents; they only change the metadata that maps indices onto the underlying bytes. These operations include (a sketch follows):
narrow(), view(), expand(), transpose()
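A sketch (the names a and b are ours) showing two of these operations reusing a single storage:

a = torch.arange(6).view(2, 3)   # view() reinterprets the same 6-element storage
b = a.narrow(1, 0, 2)            # first two columns; still no copy
a.data_ptr() == b.data_ptr()
>>> True
b.stride()
>>> (3, 1)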

x = torch.tensor([[1,2],[3,4],[5,6]])
>>> tensor([[1, 2],
	        [3, 4],
	        [5, 6]])
y = x.t()
>>> tensor([[1, 3, 5],
	        [2, 4, 6]])
Both x.storage() and y.storage() print:
>>>  1
	 2
	 3
	 4
	 5
	 6
	[torch.LongStorage of size 6]
This shows that after the transpose, the values of y are not contiguous in the storage: y's logical row order no longer matches the order in which the elements are stored.
z = y.contiguous()
>>> tensor([[1, 3, 5],
	        [2, 4, 6]])	 
z.storage()
>>>  1
	 3
	 5
	 2
	 4
	 6
	[torch.LongStorage of size 6]       
* x and y reference the same underlying data in memory, but have different strides and shapes.
* PyTorch's Tensor is implemented in C and likewise uses row-major order.
  • Notice that the storage has been reshuffled so that the elements are laid out row by row in the new storage, and the stride has been changed to reflect the new layout (see the sketch below).
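A sketch comparing the metadata: y keeps x's strides in swapped order, while z gets fresh row-major strides, and contiguous() has copied the data into a new storage.

y.stride(), z.stride()
>>> ((1, 2), (3, 1))
y.data_ptr() == z.data_ptr()
>>> False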