TORCH.TENSOR
A torch.Tensor is a multi-dimensional matrix containing elements of a single data type.
Torch defines 10 tensor types with CPU and GPU variants, as shown below:
Data type | dtype | CPU tensor | GPU tensor
--- | --- | --- | ---
32-bit floating point | torch.float32 or torch.float | torch.FloatTensor | torch.cuda.FloatTensor
64-bit floating point | torch.float64 or torch.double | torch.DoubleTensor | torch.cuda.DoubleTensor
16-bit floating point 1 | torch.float16 or torch.half | torch.HalfTensor | torch.cuda.HalfTensor
16-bit floating point 2 | torch.bfloat16 | torch.BFloat16Tensor | torch.cuda.BFloat16Tensor
32-bit complex | torch.complex32 | |
64-bit complex | torch.complex64 | |
128-bit complex | torch.complex128 or torch.cdouble | |
8-bit integer (unsigned) | torch.uint8 | torch.ByteTensor | torch.cuda.ByteTensor
8-bit integer (signed) | torch.int8 | torch.CharTensor | torch.cuda.CharTensor
16-bit integer (signed) | torch.int16 or torch.short | torch.ShortTensor | torch.cuda.ShortTensor
32-bit integer (signed) | torch.int32 or torch.int | torch.IntTensor | torch.cuda.IntTensor
64-bit integer (signed) | torch.int64 or torch.long | torch.LongTensor | torch.cuda.LongTensor
Boolean | torch.bool | torch.BoolTensor | torch.cuda.BoolTensor
quantized 8-bit integer (unsigned) | torch.quint8 | torch.ByteTensor | /
quantized 8-bit integer (signed) | torch.qint8 | torch.CharTensor | /
quantized 32-bit integer (signed) | torch.qint32 | torch.IntTensor | /
quantized 4-bit integer (unsigned) 3 | torch.quint4x2 | torch.ByteTensor | /
1 Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important at the expense of range.
2 Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as float32.
3 A quantized 4-bit integer is stored as an 8-bit signed integer. Currently it is only supported in the EmbeddingBag operator.
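The binary16 bit layout in footnote 1 can be inspected without PyTorch: Python's standard struct module supports IEEE 754 binary16 via the "e" format code. A minimal sketch of the 1-sign / 5-exponent / 10-significand split:

```python
import struct

def half_bits(x: float) -> str:
    """Show the IEEE 754 binary16 fields of x as 'sign exponent significand'."""
    # "<e" packs a Python float into little-endian binary16; "<H" reads the
    # same two bytes back as an unsigned 16-bit integer so the raw bits are visible.
    (n,) = struct.unpack("<H", struct.pack("<e", x))
    b = f"{n:016b}"
    return f"{b[0]} {b[1:6]} {b[6:]}"  # 1 sign | 5 exponent | 10 significand bits

print(half_bits(1.0))   # 0 01111 0000000000  (exponent bias is 15)
print(half_bits(-2.0))  # 1 10000 0000000000
```

bfloat16 trades those 10 significand bits down to 7 in exchange for the 8 exponent bits of float32, which is why it keeps float32's range.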
torch.Tensor is an alias for the default tensor type (torch.FloatTensor).
Initialization and basic operations
A tensor can be constructed from a Python list or sequence using the torch.tensor() constructor:
import torch
# print(torch.cuda.is_available())
print(torch.tensor([[1., -1.], [1., -1.]]))
print(torch.tensor(np.array([[1, 2, 3], [4, 5, 6]])))
Output:
D:\conda_env\python.exe D:/session9/test.py
tensor([[ 1., -1.],
        [ 1., -1.]])
Traceback (most recent call last):
  File "D:/session9/test.py", line 4, in <module>
    print(torch.tensor(np.array([[1, 2, 3], [4, 5, 6]])))
NameError: name 'np' is not defined
Process finished with exit code 1
Error: NameError: name 'np' is not defined
Fix: import the numpy module before using it. For convenience, give numpy an alias at import time; the module can then be called through the shortened alias:
import numpy as np
Note: torch.tensor() always copies data. If you have a Tensor data and just want to change its requires_grad flag, use requires_grad_() or detach() to avoid a copy. If you have a numpy array and want to avoid a copy, use torch.as_tensor().
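The copy-versus-share distinction can be observed directly. A small sketch, assuming both numpy and PyTorch are installed:

```python
import numpy as np
import torch

a = np.zeros(3)

copied = torch.tensor(a)      # always copies the numpy buffer
shared = torch.as_tensor(a)   # reuses the numpy buffer when dtype/device allow

a[0] = 7.0                    # mutate the original numpy array
print(copied)                 # tensor([0., 0., 0.], dtype=torch.float64) -- unaffected
print(shared)                 # tensor([7., 0., 0.], dtype=torch.float64) -- sees the change
```

This is why torch.as_tensor() is the right tool when you want to avoid a copy of an existing numpy array.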
A tensor of a specific data type can be constructed by passing a torch.dtype and/or a torch.device to a constructor or tensor creation op:
import torch
import numpy as np
# print(torch.cuda.is_available())
print(torch.tensor([[1., -1.], [1., -1.]]))
print(torch.tensor(np.array([[1, 2, 3], [4, 5, 6]])))
print(torch.zeros([2, 4], dtype=torch.int32))  # a zero matrix with two rows and four columns
cuda0 = torch.device('cuda:0')
print(torch.ones([2, 4], dtype=torch.float64, device=cuda0))
Output:
D:\conda_env\python.exe D:/session9/test.py
tensor([[ 1., -1.],
[ 1., -1.]])
tensor([[1, 2, 3],
[4, 5, 6]], dtype=torch.int32)
tensor([[0, 0, 0, 0],
[0, 0, 0, 0]], dtype=torch.int32)
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.]], device='cuda:0', dtype=torch.float64)
Process finished with exit code 0
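Besides torch.tensor(), torch.zeros(), and torch.ones(), PyTorch ships many more torch.* creation ops. A brief sketch of a few common ones, assuming PyTorch is installed:

```python
import torch

print(torch.arange(0, 6, 2))       # evenly spaced integers: tensor([0, 2, 4])
print(torch.eye(2))                # 2x2 identity matrix
print(torch.full((2, 2), 3.5))     # tensor filled with a constant value
print(torch.rand(2, 3).shape)      # torch.Size([2, 3]), values uniform in [0, 1)
```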
If you want to understand the difference between torch.device("cuda") and torch.device("cuda:0"), see: Difference between torch.device("cuda") and torch.device("cuda:0") - PyTorch Forums
For more information about building tensors, see Creation Ops.
A tensor's contents can be accessed and modified using Python's indexing and slicing notation:
import torch
x = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(x[1][2])
Output:
D:\conda_env\python.exe D:/session9/test.py
tensor(6)
Process finished with exit code 0
import torch
x = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(x[1][2])
x[0][1] = 8
print(x)
Output:
D:\conda_env\python.exe D:/session9/test.py
tensor(6)
tensor([[1, 8, 3],
[4, 5, 6]])
Process finished with exit code 0
Use torch.Tensor.item() to get a Python number from a tensor containing a single value:
import torch
x = torch.tensor([[1]])
print(x)
print(x.item())
Output:
D:\conda_env\python.exe D:/session9/test.py
tensor([[1]])
1
Process finished with exit code 0
import torch
x = torch.tensor(2.5)
print(x)
print(x.item())
Output:
D:\conda_env\python.exe D:/session9/test.py
tensor(2.5000)
2.5
Process finished with exit code 0
For more information about indexing, see Indexing, Slicing, Joining, Mutating Ops.
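Beyond single-element access, tensors support slicing, boolean-mask indexing, and joining ops such as torch.cat. A brief sketch, assuming PyTorch is installed:

```python
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])

print(x[:, 1])                         # column 1: tensor([2, 5])
print(x[x > 3])                        # boolean-mask indexing: tensor([4, 5, 6])
print(torch.cat([x, x], dim=0).shape)  # join along rows: torch.Size([4, 3])
```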
A tensor can be created with requires_grad=True so that torch.autograd records operations on it for automatic differentiation.
import torch
x = torch.tensor([[1., -2.], [1., 1.]], requires_grad=True)
out = x.pow(2).sum()
out.backward()
print(x.grad)
Output:
D:\conda_env\python.exe D:/session9/test.py
tensor([[ 2., -4.],
[ 2., 2.]])
Process finished with exit code 0
Each tensor has an associated torch.Storage, which holds its data. The tensor class also provides multi-dimensional, strided views of a storage and defines numeric operations on it.
Note: For more information on tensor views, see Tensor Views.
Note: For more information on torch.dtype, torch.device, and torch.layout, the attributes of a torch.Tensor, see Tensor Attributes.
Note: Methods which mutate a tensor are marked with an underscore suffix. For example, torch.FloatTensor.abs_() computes the absolute value in-place and returns the modified tensor, while torch.FloatTensor.abs() computes the result in a new tensor.
Note: To change an existing tensor's torch.device and/or torch.dtype, consider using the to() method on the tensor.
Warning: The current implementation of torch.Tensor introduces memory overhead, so it can lead to unexpectedly high memory usage in applications with many tiny tensors. If this is your case, consider using one large structure.
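The underscore convention and the to() method can be illustrated with a short sketch (assumes PyTorch; the CUDA line is commented out since it needs a GPU):

```python
import torch

x = torch.tensor([-1.5, 2.0])

y = x.abs()          # out-of-place: returns a new tensor, x is unchanged
print(x)             # tensor([-1.5000,  2.0000])

x.abs_()             # in-place (underscore suffix): modifies x itself
print(x)             # tensor([1.5000, 2.0000])

z = x.to(torch.int64)    # dtype conversion via to()
print(z.dtype)           # torch.int64
# x.to("cuda:0")         # device conversion works the same way (needs a GPU)
```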
Tensor class reference
CLASS torch.Tensor
There are a few main ways to create a tensor, depending on your use case:
- To create a tensor with pre-existing data, use torch.tensor().
- To create a tensor with specific size, use torch.* tensor creation ops (see Creation Ops).
- To create a tensor with the same size (and similar types) as another tensor, use torch.*_like tensor creation ops (see Creation Ops).
- To create a tensor with similar type but different size as another tensor, use tensor.new_* creation ops.
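The four creation routes above, in one runnable sketch (assumes PyTorch is installed):

```python
import torch

# 1. From pre-existing data
a = torch.tensor([1.0, 2.0])

# 2. With a specific size, via torch.* creation ops
b = torch.zeros(2, 3)

# 3. Same size (and similar type) as another tensor, via torch.*_like ops
c = torch.ones_like(b)

# 4. Similar type but different size, via tensor.new_* ops
d = b.new_full((1, 4), 7.0)

print(a.dtype, b.shape, c.shape, d.dtype)
```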
Tensor.T
Is this Tensor with its dimensions reversed. If n is the number of dimensions in x, x.T is equivalent to x.permute(n-1, n-2, ..., 0).
Returns a new Tensor with data as the tensor data.
Returns a Tensor of size size filled with fill_value.
Returns a Tensor of size size filled with uninitialized data.
Returns a Tensor of size size filled with 1.
Returns a Tensor of size size filled with 0.
Is True if the Tensor is stored on the GPU, False otherwise.
Is True if the Tensor is quantized, False otherwise.
Is True if the Tensor is a meta tensor, False otherwise.
Is the torch.device where this Tensor is.
This attribute is None by default and becomes a Tensor the first time a call to backward() computes gradients for self.
Alias for dim()
Returns a new tensor containing real values of the self tensor.
Returns a new tensor containing imaginary values of the self tensor.
See torch.abs()
In-place version of abs()
Alias for abs()
In-place version of absolute(). Alias for abs_().
See torch.acos()
In-place version of acos()
See torch.arccos()
In-place version of arccos()
Add a scalar or tensor to self tensor.
In-place version of add()
See torch.addbmm()
In-place version of addbmm()
See torch.addcdiv()
In-place version of addcdiv()
See torch.addcmul()
In-place version of addcmul()
See torch.addmm()
In-place version of addmm()
See torch.sspaddmm()
See torch.addmv()
In-place version of addmv()
See torch.addr()
In-place version of addr()
See torch.allclose()
See torch.amax()
See torch.amin()
See torch.aminmax()
See torch.angle()
Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.
See torch.argmax()
See torch.argmin()
See torch.argsort()
See torch.asin()
In-place version of asin()
See torch.arcsin()
In-place version of arcsin()
See torch.atan()
In-place version of atan()
See torch.arctan()
In-place version of arctan()
See torch.atan2()
In-place version of atan2()
See torch.all()
See torch.any()
Computes the gradient of current tensor w.r.t. graph leaves.
See torch.baddbmm()
In-place version of baddbmm()
Returns a result tensor where each result[i] is independently sampled from Bernoulli(self[i]).
Fills each location of self with an independent sample from Bernoulli(p).
self.bfloat16() is equivalent to self.to(torch.bfloat16).
See torch.bincount()
In-place version of bitwise_not()
In-place version of bitwise_and()
In-place version of bitwise_or()
In-place version of bitwise_xor()
In-place version of bitwise_left_shift()
In-place version of bitwise_right_shift()
See torch.bmm()
self.bool() is equivalent to self.to(torch.bool).
self.byte() is equivalent to self.to(torch.uint8).
See torch.broadcast_to().
Fills the tensor with numbers drawn from the Cauchy distribution:
See torch.ceil()
In-place version of ceil()
self.char() is equivalent to self.to(torch.int8).
See torch.cholesky()
See torch.chunk()
See torch.clamp()
In-place version of clamp()
Alias for clamp().
Alias for clamp_().
See torch.clone()
Returns a contiguous in memory tensor containing the same data as self tensor.
Copies the elements from src into self tensor and returns self.
See torch.conj()
In-place version of conj_physical()
See torch.copysign()
In-place version of copysign()
See torch.cos()
In-place version of cos()
See torch.cosh()
In-place version of cosh()
See torch.corrcoef()
See torch.cov()
See torch.acosh()
In-place version of acosh()
acosh() -> Tensor
acosh_() -> Tensor
Returns a copy of this object in CPU memory.
See torch.cross()
Returns a copy of this object in CUDA memory.
See torch.cummax()
See torch.cummin()
See torch.cumprod()
In-place version of cumprod()
See torch.cumsum()
In-place version of cumsum()
Returns the address of the first element of self tensor.
See torch.deg2rad()
Given a quantized Tensor, dequantize it and return the dequantized float Tensor.
See torch.det()
Return the number of dense dimensions in a sparse tensor self.
Returns a new Tensor, detached from the current graph.
Detaches the Tensor from the graph that created it, making it a leaf.
See torch.diag()
See torch.diagflat()
See torch.diagonal()
Fill the main diagonal of a tensor that has at least 2-dimensions.
See torch.fmax()
See torch.fmin()
See torch.diff()
See torch.digamma()
In-place version of digamma()
Returns the number of dimensions of self tensor.
See torch.dist()
See torch.div()
In-place version of div()
See torch.divide()
In-place version of divide()
See torch.dot()
self.double() is equivalent to self.to(torch.float64).
See torch.dsplit()
See torch.eig()
Returns the size in bytes of an individual element.
See torch.eq()
In-place version of eq()
See torch.equal()
See torch.erf()
In-place version of erf()
See torch.erfc()
In-place version of erfc()
See torch.erfinv()
In-place version of erfinv()
See torch.exp()
In-place version of exp()
See torch.expm1()
In-place version of expm1()
Returns a new view of the self tensor with singleton dimensions expanded to a larger size.
Expand this tensor to the same size as other.
Fills self tensor with elements drawn from the exponential distribution.
See torch.fix().
In-place version of fix()
Fills self tensor with the specified value.
See torch.flatten()
See torch.flip()
See torch.fliplr()
See torch.flipud()
self.float() is equivalent to self.to(torch.float32).
In-place version of float_power()
See torch.floor()
In-place version of floor()
In-place version of floor_divide()
See torch.fmod()
In-place version of fmod()
See torch.frac()
In-place version of frac()
See torch.frexp()
See torch.gather()
See torch.gcd()
In-place version of gcd()
See torch.ge().
In-place version of ge().
In-place version of greater_equal().
Fills self tensor with elements drawn from the geometric distribution.
See torch.geqrf()
See torch.ger()
For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides.
See torch.gt().
In-place version of gt().
See torch.greater().
In-place version of greater().
self.half() is equivalent to self.to(torch.float16).
See torch.histc()
See torch.hsplit()
See torch.hypot()
In-place version of hypot()
See torch.i0()
In-place version of i0()
See torch.igamma()
In-place version of igamma()
See torch.igammac()
In-place version of igammac()
Accumulate the elements of alpha times tensor into the self tensor by adding to the indices in the order given in index.
Out-of-place version of torch.Tensor.index_add_().
Copies the elements of tensor into the self tensor by selecting the indices in the order given in index.
Out-of-place version of torch.Tensor.index_copy_().
Fills the elements of the self tensor with value value by selecting the indices in the order given in index.
Out-of-place version of torch.Tensor.index_fill_().
Puts values from the tensor values into the tensor self using the indices specified in indices (which is a tuple of Tensors).
Out-of-place version of index_put_().
Return the indices tensor of a sparse COO tensor.
See torch.inner().
self.int() is equivalent to self.to(torch.int32).
Given a quantized Tensor, self.int_repr() returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor.
See torch.inverse()
See torch.isclose()
See torch.isfinite()
See torch.isinf()
See torch.isposinf()
See torch.isneginf()
See torch.isnan()
Returns True if self tensor is contiguous in memory in the order specified by memory format.
Returns True if the data type of self is a complex data type.
Returns True if the conjugate bit of self is set to true.
Returns True if the data type of self is a floating point data type.
See torch.is_inference()
All Tensors that have requires_grad which is False will be leaf Tensors by convention.
Returns true if this tensor resides in pinned memory.
Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride).
Checks if tensor is in shared memory.
Returns True if the data type of self is a signed data type.
Is True if the Tensor uses sparse storage layout, False otherwise.
See torch.istft()
See torch.isreal()
Returns the value of this tensor as a standard Python number.
See torch.kthvalue()
See torch.lcm()
In-place version of lcm()
See torch.ldexp()
In-place version of ldexp()
See torch.le().
In-place version of le().
See torch.less_equal().
In-place version of less_equal().
See torch.lerp()
In-place version of lerp()
See torch.lgamma()
In-place version of lgamma()
See torch.log()
In-place version of log()
See torch.logdet()
See torch.log10()
In-place version of log10()
See torch.log1p()
In-place version of log1p()
See torch.log2()
In-place version of log2()
Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean μ and standard deviation σ.
In-place version of logical_and()
In-place version of logical_not()
In-place version of logical_or()
In-place version of logical_xor()
See torch.logit()
In-place version of logit()
self.long() is equivalent to self.to(torch.int64).
See torch.lstsq()
See torch.lt().
In-place version of lt().
lt(other) -> Tensor
In-place version of less().
See torch.lu()
See torch.lu_solve()
Makes a cls instance with the same data pointer as self.
Applies callable for each element in self tensor and the given tensor and stores the results in self tensor.
Copies elements from source into self tensor at positions where the mask is True.
Out-of-place version of torch.Tensor.masked_scatter_()
Fills elements of self tensor with value where mask is True.
Out-of-place version of torch.Tensor.masked_fill_()
See torch.matmul()
Note: matrix_power() is deprecated; use torch.linalg.matrix_power() instead.
See torch.max()
See torch.maximum()
See torch.mean()
See torch.nanmean()
See torch.median()
See torch.min()
See torch.minimum()
See torch.mm()
See torch.smm()
See torch.mode()
See torch.movedim()
See torch.moveaxis()
See torch.msort()
See torch.mul().
In-place version of mul().
See torch.multiply().
In-place version of multiply().
See torch.mv()
See torch.mvlgamma()
In-place version of mvlgamma()
See torch.nansum()
See torch.narrow()
Same as Tensor.narrow() except returning a copy rather than shared storage.
Alias for dim()
See torch.nan_to_num().
In-place version of nan_to_num().
See torch.ne().
In-place version of ne().
See torch.not_equal().
In-place version of not_equal().
See torch.neg()
In-place version of neg()
See torch.negative()
In-place version of negative()
Alias for numel()
In-place version of nextafter()
See torch.nonzero()
See torch.norm()
Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.
See torch.numel()
Returns self tensor as a NumPy ndarray.
See torch.orgqr()
See torch.ormqr()
See torch.outer().
See torch.permute()
Copies the tensor to pinned memory, if it’s not already pinned.
See torch.pinverse()
In-place version of polygamma()
See torch.positive()
See torch.pow()
In-place version of pow()
See torch.prod()
Copies the elements from source into the positions specified by index.
See torch.qr()
Returns the quantization scheme of a given QTensor.
See torch.quantile()
Given a Tensor quantized by linear(affine) quantization, returns the scale of the underlying quantizer().
Given a Tensor quantized by linear(affine) quantization, returns the zero_point of the underlying quantizer().
Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer.
Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer.
Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.
See torch.rad2deg()
Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1].
See torch.ravel()
In-place version of reciprocal()
Ensures that the tensor memory is not reused for another tensor until all current work queued on stream are complete.
Registers a backward hook.
In-place version of remainder()
See torch.renorm()
In-place version of renorm()
Repeats this tensor along the specified dimensions.
Is True if gradients need to be computed for this Tensor, False otherwise.
Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place.
Returns a tensor with the same data and number of elements as self but with the specified shape.
Returns this tensor as the same shape as other.
Resizes self tensor to the specified size.
Resizes the self tensor to be the same size as the specified tensor.
Enables this Tensor to have their grad populated during backward().
Is True if this Tensor is non-leaf and its grad is enabled to be populated during backward(), False otherwise.
See torch.roll()
See torch.rot90()
See torch.round()
In-place version of round()
See torch.rsqrt()
In-place version of rsqrt()
Out-of-place version of torch.Tensor.scatter_()
Writes all values from the tensor src into self at the indices specified in the index tensor.
Adds all values from the tensor other into self at the indices specified in the index tensor in a similar fashion as scatter_().
Out-of-place version of torch.Tensor.scatter_add_()
Slices the self tensor along the selected dimension at the given index.
Sets the underlying storage, size, and strides.
Moves the underlying storage to shared memory.
self.short() is equivalent to self.to(torch.int16).
See torch.sigmoid()
In-place version of sigmoid()
See torch.sign()
In-place version of sign()
See torch.signbit()
See torch.sgn()
In-place version of sgn()
See torch.sin()
In-place version of sin()
See torch.sinc()
In-place version of sinc()
See torch.sinh()
In-place version of sinh()
See torch.asinh()
In-place version of asinh()
See torch.arcsinh()
In-place version of arcsinh()
Returns the size of the self tensor.
See torch.slogdet()
See torch.solve()
See torch.sort()
See torch.split()
Returns a new sparse tensor with values from a strided tensor self filtered by the indices of the sparse tensor mask.
Return the number of sparse dimensions in a sparse tensor self.
See torch.sqrt()
In-place version of sqrt()
See torch.square()
In-place version of square()
See torch.squeeze()
In-place version of squeeze()
See torch.std()
See torch.stft()
Returns the underlying storage.
Returns self tensor's offset in the underlying storage in terms of number of storage elements (not bytes).
Returns the type of the underlying storage.
Returns the stride of self tensor.
See torch.sub().
In-place version of sub()
See torch.subtract().
In-place version of subtract().
See torch.sum()
Sum this tensor to size.
See torch.svd()
See torch.swapaxes()
See torch.swapdims()
See torch.symeig()
See torch.t()
In-place version of t()
See torch.tile()
Performs Tensor dtype and/or device conversion.
Returns a copy of the tensor in torch.mkldnn layout.
See torch.take()
See torch.tan()
In-place version of tan()
See torch.tanh()
In-place version of tanh()
See torch.atanh()
In-place version of atanh()
See torch.arctanh()
In-place version of arctanh()
Returns the tensor as a (nested) list.
See torch.topk()
Returns a sparse copy of the tensor.
See torch.trace()
In-place version of transpose()
See torch.tril()
In-place version of tril()
See torch.triu()
In-place version of triu()
In-place version of true_divide()
See torch.trunc()
In-place version of trunc()
Returns the type if dtype is not provided, else casts this object to the specified type.
Returns this tensor cast to the type of the given tensor.
See torch.unbind()
Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension.
Fills self tensor with numbers sampled from the continuous uniform distribution.
Returns the unique elements of the input tensor.
Eliminates all but the first element from every consecutive group of equivalent elements.
In-place version of unsqueeze()
Return the values tensor of a sparse COO tensor.
See torch.var()
See torch.vdot()
Returns a new tensor with the same data as the self tensor but of a different shape.
View this tensor as the same size as other.
See torch.vsplit()
self.where(condition, y) is equivalent to torch.where(condition, self, y).
See torch.xlogy()
In-place version of xlogy()
Fills self tensor with zeros.
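To close, a short sketch exercising a handful of the methods listed above (assumes PyTorch is installed):

```python
import torch

x = torch.arange(6)

print(x.view(2, 3))                 # reshape without copying data
print(x.sum().item())               # reduce, then extract a Python number
print(x.ndim, x.numel())            # number of dimensions, number of elements
print(x.clone().add_(1)[0].item())  # clone first, so the in-place add_ leaves x intact
print(x[0].item())                  # original is unchanged
```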