Tensor Creation
1. Direct creation
torch.tensor()
def tensor(data: Any, dtype: Optional[_dtype] = None, device: Device = None, requires_grad: _bool = False) -> Tensor
data: the input data; can be a list, NumPy ndarray, scalar, etc.
dtype: data type; defaults to the type inferred from data
device: the device (cpu, cuda)
requires_grad: whether the tensor requires gradients
pin_memory: whether to allocate the tensor in pinned (page-locked) memory
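A quick sketch of typical usage (the concrete values are just examples):
import torch
# build a float tensor from a nested list and enable gradient tracking
t = torch.tensor([[1., 2.], [3., 4.]], dtype=torch.float32, requires_grad=True)
print(t.dtype, t.device, t.requires_grad)   # torch.float32 cpu True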
torch.from_numpy(ndarray)
def from_numpy(ndarray)
The returned tensor and the ndarray share the same underlying memory (the two Python objects are distinct), so modifying the generated tensor changes the ndarray, and vice versa. This is because the tensor only stores metadata such as dtype, device, requires_grad, and pin_memory; when the data is accessed, it is still read from the ndarray's buffer.
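A small sketch of the shared-memory behaviour described above:
import numpy as np
import torch
a = np.array([1, 2, 3])
t = torch.from_numpy(a)   # t shares a's underlying buffer
a[0] = 100                # modify the ndarray ...
print(t)                  # tensor([100, 2, 3]): the change shows up in the tensor
t[1] = -1                 # modify the tensor ...
print(a)                  # [100  -1   3]: the change shows up in the ndarray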
2. Creation from numeric values
torch.zeros()
def zeros(size: Sequence[Union[_int, SymInt]], *, out: Optional[Tensor] = None, dtype: Optional[_dtype] = None, layout: Optional[_layout] = None, device: Optional[Union[_device, str, None]] = None, pin_memory: Optional[_bool] = False, requires_grad: Optional[_bool] = False)
size: shape of the output tensor
out: the output tensor to write into; torch.zeros((3, 3), out=a) is equivalent to a = torch.zeros((3, 3))
layout: the memory layout of the tensor (e.g. torch.strided)
torch.zeros_like()
def zeros_like(input: Tensor, *, memory_format: Optional[memory_format] = None, dtype: Optional[_dtype] = None, layout: Optional[_layout] = None, device: Optional[Union[_device, str, None]] = None, pin_memory: Optional[_bool] = False, requires_grad: Optional[_bool] = False)
input: creates an all-zeros tensor with the same shape as input
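A short sketch of zeros with out and of zeros_like (shapes chosen arbitrarily):
import torch
a = torch.empty((3, 3))
torch.zeros((3, 3), out=a)    # fills a with zeros, same effect as a = torch.zeros((3, 3))
x = torch.rand(2, 4)
z = torch.zeros_like(x)       # all-zeros tensor with the same shape and dtype as x
print(a)
print(z.shape)                # torch.Size([2, 4])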
torch.ones()
torch.ones_like()
torch.full()
torch.full_like()
Analogous to torch.zeros()
def full(size: _size, fill_value: Union[Number, _complex], *, out: Optional[Tensor] = None, layout: _layout = strided, dtype: Optional[_dtype] = None, device: Device = None, requires_grad: _bool = False)
fill_value: the value used to fill the tensor
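A quick sketch of full and full_like (values arbitrary):
import torch
f = torch.full((2, 3), 7.)    # 2x3 tensor filled with 7.0
g = torch.full_like(f, 0.5)   # same shape as f, filled with 0.5
print(f)
print(g)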
torch.arange()
Creates a 1-D tensor with evenly stepped (arithmetic-progression) values
def arange(start: Number, end: Number, step: Number, *, out: Optional[Tensor] = None, dtype: Optional[_dtype] = None, device: Device = None, requires_grad: _bool = False)
start: start value
end: end value; the interval is [start, end)
step: step size, defaults to 1
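For example:
import torch
t = torch.arange(2, 10, 2)    # start=2, end=10 (exclusive), step=2
print(t)                      # tensor([2, 4, 6, 8])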
torch.linspace()
Creates a 1-D tensor of evenly spaced values
def linspace(start: Number, end: Number, steps: Optional[_int] = None, *, out: Optional[Tensor] = None, dtype: Optional[_dtype] = None, device: Device = None, requires_grad: _bool = False)
steps: number of elements in the output tensor; the interval for torch.linspace() is the closed interval [start, end]
torch.logspace()
Creates a 1-D tensor of values evenly spaced on a logarithmic scale
def logspace(start: Number, end: Number, steps: Optional[_int] = None, base: _float = 10.0, *, out: Optional[Tensor] = None, dtype: Optional[_dtype] = None, device: Device = None, requires_grad: _bool = False)
base: base of the logarithm
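A short sketch contrasting linspace and logspace (endpoints chosen arbitrarily):
import torch
a = torch.linspace(2, 10, steps=5)            # 5 evenly spaced points on the closed interval [2, 10]
print(a)                                      # tensor([ 2.,  4.,  6.,  8., 10.])
b = torch.logspace(0, 3, steps=4, base=10.0)  # 10**0, 10**1, 10**2, 10**3
print(b)                                      # tensor([   1.,   10.,  100., 1000.])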
torch.eye()
Creates an identity matrix
def eye(n: Union[_int, SymInt], *, out: Optional[Tensor] = None, dtype: Optional[_dtype] = None, layout: Optional[_layout] = None, device: Optional[Union[_device, str, None]] = None, pin_memory: Optional[_bool] = False, requires_grad: Optional[_bool] = False)
n: number of rows (and, by default, columns); a second argument m can be passed to set the number of columns, as in torch.eye(n, m)
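For example:
import torch
i = torch.eye(3)       # 3x3 identity matrix
r = torch.eye(3, 5)    # 3x5 matrix with ones on the main diagonal
print(i)
print(r.shape)         # torch.Size([3, 5])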
3. Creation from probability distributions
torch.normal()
def normal(mean: Tensor, std: Tensor, *, generator: Optional[Generator] = None, out: Optional[Tensor] = None)
There are four combinations of mean and std:
mean: scalar, std: scalar
mean: scalar, std: tensor
mean: tensor, std: scalar
mean: tensor, std: tensor
Each output element is sampled from a normal distribution with the corresponding mean and std.
When both are scalars, the output shape must be given explicitly via size; e.g. to draw 4 samples, the scalar mean and std are conceptually broadcast four times and one value is drawn each time.
mean tensor, std scalar: std is broadcast to mean's shape and one value is drawn per element.
mean scalar, std tensor: analogous.
mean tensor, std tensor: mean and std are paired element-wise and one value is drawn per pair.
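A sketch of the four cases (sizes and values chosen arbitrarily):
import torch
# both scalars: the output shape must be given explicitly
a = torch.normal(0., 1., size=(4,))
# mean tensor, std scalar: std is broadcast over mean's shape
b = torch.normal(torch.arange(1., 5.), 1.)
# mean scalar, std tensor: mean is broadcast over std's shape
c = torch.normal(1., torch.arange(1., 5.))
# both tensors: mean and std are paired element-wise
d = torch.normal(torch.arange(1., 5.), torch.arange(1., 5.))
print(a.shape, b.shape, c.shape, d.shape)   # all torch.Size([4])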
torch.rand()
torch.rand_like()
Samples from a uniform distribution on the interval [0, 1)
Takes the same parameters as torch.zeros()
torch.randint()
torch.randint_like()
Samples integers uniformly from the interval [low, high)
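For example:
import torch
u = torch.rand(2, 3)              # uniform samples on [0, 1)
i = torch.randint(0, 10, (2, 3))  # integer samples uniform on [0, 10)
print(u)
print(i)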
torch.randperm()
def randperm(n: Union[_int, SymInt], *, generator: Optional[Generator], out: Optional[Tensor] = None, dtype: Optional[_dtype] = None, layout: Optional[_layout] = None, device: Optional[Union[_device, str, None]] = None, pin_memory: Optional[_bool] = False, requires_grad: Optional[_bool] = False)
n: length of the tensor; generates a random permutation of the integers 0 to n-1
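For example:
import torch
p = torch.randperm(5)    # e.g. tensor([3, 0, 4, 1, 2]): a random permutation of 0..4
print(p)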
torch.bernoulli()
def bernoulli(input: Tensor, *, generator: Optional[Generator] = None, out: Optional[Tensor] = None)
Samples from a Bernoulli distribution (a 0/1 two-point distribution); each element of input gives the probability of drawing a 1
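For example:
import torch
p = torch.rand(3, 3)      # each entry is the probability of drawing a 1
b = torch.bernoulli(p)    # tensor of 0s and 1s
print(b)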
Tensor Operations
1. Concatenation and splitting
torch.cat()
def cat(tensors: Union[Tuple[Tensor, ...], List[Tensor]], dim: _int = 0, *, out: Optional[Tensor] = None)
tensors: the tensors to concatenate
dim: the dimension along which to concatenate; no new dimension is created
torch.stack()
def stack(tensors: Union[Tuple[Tensor, ...], List[Tensor]], dim: _int = 0, *, out: Optional[Tensor] = None)
dim: the dimension at which to stack; a new dimension is created
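A quick sketch contrasting cat and stack:
import torch
t = torch.ones((2, 3))
c = torch.cat([t, t], dim=0)     # shape (4, 3): no new dimension
s = torch.stack([t, t], dim=0)   # shape (2, 2, 3): a new dimension is inserted
print(c.shape, s.shape)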
torch.chunk()
def chunk(input: Tensor, chunks: _int, dim: _int = 0) -> List[Tensor]
input: the input tensor
chunks: the number of chunks to split into
dim: the dimension along which to split
The output is a tuple of tensors; if the dimension is not evenly divisible, the last chunk is smaller
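For example:
import torch
t = torch.ones((2, 7))
chunks = torch.chunk(t, chunks=3, dim=1)   # dim 1 has length 7, so the chunk widths are 3, 3, 1
print([c.shape for c in chunks])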
torch.split()
def split(tensor: Tensor, split_size_or_sections: Union[int, List[int]], dim: int = 0) -> Tuple[Tensor, ...]
tensor: the input tensor
split_size_or_sections: as an integer, the size of each chunk along dim (any remainder forms a smaller final chunk)
split_size_or_sections: as a list, each element gives the size of the corresponding chunk
The output is a tuple
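A quick sketch of both forms:
import torch
t = torch.ones((2, 5))
a = torch.split(t, 2, dim=1)        # chunk size 2 -> widths 2, 2, 1 (the remainder)
b = torch.split(t, [1, 4], dim=1)   # explicit widths 1 and 4 (must sum to 5)
print([x.shape for x in a])
print([x.shape for x in b])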
2. Tensor indexing
torch.index_select()
def index_select(input: Tensor, dim: _int, index: Tensor, *, out: Optional[Tensor] = None) -> Tensor
input: the input tensor
dim: the dimension to select along
index: the indices to select along that dimension; must be a tensor of dtype torch.long
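For example:
import torch
t = torch.arange(12).reshape(3, 4)
idx = torch.tensor([0, 2], dtype=torch.long)     # indices must be torch.long
rows = torch.index_select(t, dim=0, index=idx)   # picks rows 0 and 2
print(rows)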
torch.masked_select()
def masked_select(input: Tensor, mask: Tensor, *, out: Optional[Tensor] = None) -> Tensor
mask: a boolean tensor broadcastable with input
The output is a 1-D tensor
Common comparison ops for building masks: le (less than or equal), ge (greater than or equal), gt (greater than)
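A quick sketch using ge to build the mask:
import torch
t = torch.randint(0, 9, (3, 3))
mask = t.ge(5)                          # boolean mask: elements >= 5
picked = torch.masked_select(t, mask)   # 1-D tensor of the selected elements
print(t)
print(picked)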
3. Tensor transformation
torch.reshape()
def reshape(input: Tensor, shape: Sequence[Union[_int, SymInt]]) -> Tensor
input: when the tensor is contiguous in memory, the output shares memory with input
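A small sketch of the shared-memory behaviour:
import torch
t = torch.arange(8)
r = torch.reshape(t, (2, 4))   # t is contiguous, so r shares its memory
t[0] = 100
print(r[0, 0])                 # tensor(100): the change is visible through r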
torch.transpose()
def transpose(input: Tensor, dim0: _int, dim1: _int) -> Tensor
dim0: the first dimension to swap
dim1: the second dimension to swap; dim0 and dim1 are exchanged
torch.t()
def t(input: Tensor) -> Tensor
input: a 2-D tensor; performs the transpose of a 2-D tensor
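A quick sketch of transpose and t:
import torch
x = torch.rand(2, 3, 4)
y = torch.transpose(x, 0, 2)   # swap dim 0 and dim 2 -> shape (4, 3, 2)
print(y.shape)
m = torch.rand(2, 5)
print(torch.t(m).shape)        # torch.Size([5, 2]): 2-D transpose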
torch.squeeze()
def squeeze(input: Tensor, dim: Optional[_int] = None) -> Tensor
When dim is None, all dimensions of length 1 are removed
When dim is given, that dimension is removed only if its length is 1
torch.unsqueeze()
def unsqueeze(input: Tensor, dim: _int) -> Tensor
Inserts a new dimension of length 1 at the specified position dim
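A quick sketch of squeeze and unsqueeze:
import torch
x = torch.rand(1, 2, 1, 3)
print(torch.squeeze(x).shape)            # torch.Size([2, 3]): all length-1 dims removed
print(torch.squeeze(x, dim=0).shape)     # torch.Size([2, 1, 3]): only dim 0 removed
print(torch.squeeze(x, dim=1).shape)     # torch.Size([1, 2, 1, 3]): dim 1 has length 2, left unchanged
print(torch.unsqueeze(x, dim=0).shape)   # torch.Size([1, 1, 2, 1, 3]): new length-1 dim at position 0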
4. Math operations
torch.add()
def add(input: Union[Tensor, Number], other: Union[Tensor, Number], *, alpha: Optional[Number] = 1, out: Optional[Tensor] = None) -> Tensor
out = input + alpha * other (element-wise)
torch.addcdiv()
def addcdiv(self: Tensor, value: Union[Number, _complex], tensor1: Tensor, tensor2: Tensor) -> Tensor
out = input + value * (tensor1 / tensor2) (element-wise)
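A quick sketch of add with alpha and of addcdiv (values arbitrary):
import torch
x = torch.ones(3)
y = torch.full((3,), 2.)
print(torch.add(x, y, alpha=10))             # 1 + 10*2 = 21 for each element
t1 = torch.full((3,), 4.)
t2 = torch.full((3,), 2.)
print(torch.addcdiv(x, t1, t2, value=0.5))   # 1 + 0.5*(4/2) = 2 for each element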