import torch

a = torch.rand(4, 3, 2)
b = a[:, :, 1]  # index 1 along the last dim (dim 2); that dim is dropped
c = a[:, 1]     # index 1 along dim 1; shorthand for a[:, 1, :]
print(a)
tensor([[[0.5200, 0.9955],
         [0.3993, 0.1790],
         [0.4560, 0.8735]],

        [[0.6110, 0.7368],
         [0.4449, 0.4358],
         [0.0840, 0.5496]],

        [[0.4827, 0.4580],
         [0.1750, 0.7478],
         [0.3505, 0.0089]],

        [[0.4543, 0.4423],
         [0.2201, 0.3439],
         [0.0718, 0.4791]]])
print(b.size())
torch.Size([4, 3])
print(b)
tensor([[0.9955, 0.1790, 0.8735],
        [0.7368, 0.4358, 0.5496],
        [0.4580, 0.7478, 0.0089],
        [0.4423, 0.3439, 0.4791]])
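As a quick sketch (assuming PyTorch is installed), `a[:, :, 1]` fixes index 1 on the last dimension and removes it, so a `(4, 3, 2)` tensor becomes `(4, 3)`; the `Ellipsis` form `a[..., 1]` and `Tensor.select` do the same thing:

```python
import torch

a = torch.rand(4, 3, 2)
b = a[:, :, 1]  # fix index 1 on dim 2; that dim is dropped

# the last dimension is gone
assert b.shape == torch.Size([4, 3])

# equivalent spellings of the same slice
assert torch.equal(b, a[..., 1])       # Ellipsis expands to ':' for leading dims
assert torch.equal(b, a.select(2, 1))  # select(dim, index) also drops the dim
```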
print(c.size())
torch.Size([4, 2])
print(c)
tensor([[0.3993, 0.1790],
        [0.4449, 0.4358],
        [0.1750, 0.7478],
        [0.2201, 0.3439]])
d = a[:, 1, :]  # in this example a[:, 1] == a[:, 1, :], i.e. c equals d
print(d)
tensor([[0.3993, 0.1790],
        [0.4449, 0.4358],
        [0.1750, 0.7478],
        [0.2201, 0.3439]])
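The equivalence of `c` and `d` above can be checked directly, and it is worth noting that basic slicing like this returns a *view* sharing storage with `a`, so writing through the slice mutates the original tensor. A minimal sketch:

```python
import torch

a = torch.rand(4, 3, 2)
c = a[:, 1]     # omitted trailing dims default to ':'
d = a[:, 1, :]  # fully spelled-out form

# c and d are the same slice
assert torch.equal(c, d)

# basic indexing yields a view: writing through c changes a
c[0, 0] = -1.0
assert a[0, 1, 0].item() == -1.0
```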
【PyTorch】Tensor dimension indexing: the difference between a[:,:,1] and a[:,1]
This article uses a worked example to explain the difference between a[:,1] and a[:,1,:] in PyTorch, shows how slicing and indexing operate on tensors, and touches on the concept of views. The focus is on indexing three-dimensional tensors efficiently and on the results the relevant operations produce.