1. Basic syntax
1. torch.empty(5, 3)  # creates an uninitialized matrix (the values are whatever is already in memory, not true zeros)
2. torch.rand(5, 3)   # creates a matrix of random values
rand draws each element from a uniform distribution on [0, 1), while randn draws from the standard normal distribution.
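A quick sketch of the difference between the two random initializers (the seed is only there to make the run repeatable):

```python
import torch

torch.manual_seed(0)  # fixed seed for reproducibility

u = torch.rand(10000)   # uniform on [0, 1)
n = torch.randn(10000)  # standard normal: mean 0, std 1

print(u.min().item() >= 0.0)  # True: uniform samples never go below 0
print(u.max().item() < 1.0)   # True: and never reach 1
print((n < 0).any().item())   # True: normal samples can be negative
```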
3. torch.zeros(5, 3)  # all-zero matrix
4. torch.tensor([1.1, 2.2, 3.3])  # build a tensor directly from a Python list
Build a 2-row, 5-column matrix with elements 1 to 10: torch.arange(1, 11).reshape(2, 5) (a plain Python range has no reshape method, so torch.arange is needed here)
5. x = x.new_ones(5, 3, dtype=torch.double)  # new shape and dtype, inherits x's device
6. x = torch.randn_like(x, dtype=torch.float)  # same shape as x, filled with normal random values
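The `new_*` and `*_like` factory methods each inherit different properties from an existing tensor; a small sketch of what carries over:

```python
import torch

x = torch.zeros(2, 2)

# new_ones: takes a new shape, inherits x's device (dtype overridden here)
y = x.new_ones(5, 3, dtype=torch.double)

# randn_like: inherits x's shape, dtype overridden to float32
z = torch.randn_like(x, dtype=torch.float)

print(y.shape, y.dtype)  # torch.Size([5, 3]) torch.float64
print(z.shape, z.dtype)  # torch.Size([2, 2]) torch.float32
```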
7. torch.randint(low=1, high=4, size=[3, 4])  # random integers from 1 to 3 (high is exclusive), shape 3 rows by 4 columns
8. Tensor shape: x.size()
Get the size of the last dimension: x.size(-1)
9. Number of dimensions: x.dim()
10. Maximum value: x.max()
Minimum value: x.min()
11. Transpose: x.t() or x.transpose(0, 1)
(1) x.t() only works on 2-D tensors; transpose always takes the two dimensions to swap, e.g. x.transpose(1, 0)
(2) To reorder three or more dimensions at once, use x.permute(), which takes the complete new dimension order; for a 3-D tensor, x.permute(0, 1, 2) is the identity order
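The shape changes from transpose and permute can be checked directly:

```python
import torch

x = torch.rand(2, 3, 4)

a = x.transpose(0, 1)   # swaps exactly two dimensions
b = x.permute(2, 0, 1)  # reorders all dimensions in one call

print(a.shape)  # torch.Size([3, 2, 4])
print(b.shape)  # torch.Size([4, 2, 3])
```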
12. Matrix addition: torch.add(a, b) or a + b
13. Indexing: x[:, 1] selects the column at index 1 (the second column)
Get the element at row index 1, column index 3: x[1, 3]
Assign to that element: x[1, 3] = 1
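A small sketch of these indexing operations on a concrete tensor:

```python
import torch

x = torch.arange(20).reshape(4, 5)  # rows 0..3, columns 0..4

print(x[:, 1])          # tensor([ 1,  6, 11, 16]) — the column at index 1
print(x[1, 3].item())   # 8 — row index 1, column index 3

x[1, 3] = 100           # assign to a single element
print(x[1, 3].item())   # 100
```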
14. view changes the shape of a tensor:
x = torch.randn(4,4)
y = x.view(16)
z = x.view(-1, 8)
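Two details worth knowing about view: -1 asks PyTorch to infer that dimension, and view only works on contiguous memory, so after a transpose, reshape is the safe choice. A sketch:

```python
import torch

x = torch.randn(4, 4)
z = x.view(-1, 8)      # -1 infers the size: 16 elements / 8 columns = 2 rows
print(z.shape)         # torch.Size([2, 8])

# view requires contiguous memory; a transposed tensor is not contiguous
t = x.t()
flat = t.reshape(16)   # t.view(16) would raise a RuntimeError here
print(flat.shape)      # torch.Size([16])
```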
15. Wrap a number as a tensor: x = torch.tensor([[1.8]])
Get the Python number back from a one-element tensor: y = x.item()
16. Convert to a NumPy array: x.numpy()
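One caveat worth a sketch: item() only works on one-element tensors, and numpy() returns an array that shares memory with the CPU tensor, so in-place changes to the tensor show up in the array:

```python
import torch

x = torch.tensor([[1.8]])
v = x.item()            # a plain Python float; only valid for one-element tensors
print(type(v))          # <class 'float'>

a = torch.ones(3)
n = a.numpy()           # shares memory with the tensor (CPU tensors only)
a.add_(1)               # the in-place change is visible through the array
print(n)                # [2. 2. 2.]
```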
17. Specify the data type: torch.tensor([1, 2], dtype=torch.int32)
Get the data type: x.dtype
Convert between data types: x.int(), x.long()
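These dtype operations in action:

```python
import torch

x = torch.tensor([1, 2], dtype=torch.int32)
print(x.dtype)           # torch.int32

print(x.long().dtype)    # torch.int64
print(x.float().dtype)   # torch.float32
```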
18. Using the GPU:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
zeroV = torch.zeros([3, 5], device=device)
or move an existing tensor with to (which returns a copy on the target device, so the result must be reassigned):
a = a.to(device)
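A runnable version of the device-agnostic pattern; note the () on is_available — without it, the truthy function object would always select 'cuda:0':

```python
import torch

# pick the GPU when one is present, otherwise fall back to the CPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

zero_v = torch.zeros([3, 5], device=device)  # create directly on the device
a = torch.rand(2, 2).to(device)              # .to returns a copy on the device

print(zero_v.device, a.device)
```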
Linear regression implemented by hand:
import torch

learning_rate = 0.1

# target relation: y = 0.5x + 0.8
# initialize the input and the ground-truth output
x = torch.rand([500, 1], dtype=torch.float)
y_true = x * 0.5 + 0.8

# initialize weight w and bias b with requires_grad=True
w = torch.rand([1, 1], requires_grad=True)
b = torch.zeros([1, 1], requires_grad=True)  # zeros rather than empty: empty leaves b uninitialized

# training loop
for i in range(300):
    pred = torch.matmul(x, w) + b
    loss = (pred - y_true).pow(2).mean()
    # zero out the gradients accumulated in the previous iteration
    if w.grad is not None:
        w.grad.data.zero_()
    if b.grad is not None:
        b.grad.data.zero_()
    loss.backward()
    w.data = w.data - learning_rate * w.grad
    b.data = b.data - learning_rate * b.grad
    if not i % 50:
        print(f"w:{w.item()}, b:{b.item()}, loss:{loss.item()}")
Output:
w:0.6309229135513306, b:0.5492251515388489, loss:0.06151945888996124
w:0.6085758805274963, b:0.7422322630882263, loss:0.0009876982076093554
w:0.5572196245193481, b:0.7695564031600952, loss:0.0002743132645264268
w:0.5301547050476074, b:0.7839562296867371, loss:7.618464587721974e-05
w:0.5158914923667908, b:0.7915449738502502, loss:2.1158641175134107e-05
w:0.5083746910095215, b:0.7955442667007446, loss:5.8762325352290645e-06
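The manual weight updates and gradient zeroing above can also be delegated to an optimizer. A sketch of the same fit using torch.optim.SGD (the seed is added here only for repeatability):

```python
import torch

torch.manual_seed(0)

x = torch.rand(500, 1)
y_true = x * 0.5 + 0.8

w = torch.rand(1, 1, requires_grad=True)
b = torch.zeros(1, 1, requires_grad=True)

# SGD performs the `param -= lr * grad` step; zero_grad() replaces the
# manual grad.data.zero_() calls
optimizer = torch.optim.SGD([w, b], lr=0.1)

for i in range(300):
    pred = torch.matmul(x, w) + b
    loss = (pred - y_true).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(w.item(), b.item(), loss.item())
```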