Torch Learning Series, Part 1

1. PyTorch tensors are similar to NumPy's ndarray, and they also support GPU acceleration.

a. Some of the ways to construct one:

x = torch.empty(5, 3)                       # uninitialized 5x3 tensor
x = torch.rand(5, 3)                        # uniform random values in [0, 1)
x = torch.zeros(5, 3, dtype=torch.long)     # zeros with an explicit dtype
x = torch.tensor([5.5, 3])                  # build directly from data
x = x.new_ones(5, 3, dtype=torch.double)    # reuses x's properties unless overridden
x = torch.randn_like(x, dtype=torch.float)  # same size as x, new dtype
print(x.size())                             # torch.Size([5, 3])

b. Its arithmetic operations look like this:

x + y  ==>  torch.add(x, y)  ==>  result = torch.empty(5, 3); torch.add(x, y, out=result)
y.add_(x)  ==>  y = y + x  (in-place)

Any operation that mutates a tensor in-place is post-fixed with an _. For example: x.copy_(y), x.t_(), will change x.
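
A small sketch of that convention (the toy tensors here are chosen purely for illustration):

x = torch.ones(2, 3)
y = torch.zeros(2, 3)
x.copy_(y)         # x is overwritten in place with y's values
x.t_()             # x is transposed in place; its shape is now (3, 2)
print(x.size())    # torch.Size([3, 2])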

x[:, 1]                 # NumPy-style indexing
x = torch.randn(4, 4)
y = x.view(16)          # reshape to a flat vector of 16 elements
z = x.view(-1, 8)       # the size -1 is inferred from the other dimensions
x = torch.randn(1)
print(x.item())         # .item() extracts the Python number from a one-element tensor

Converting to NumPy: b = a.numpy(). This is a shallow copy (the tensor and the array share memory), so b changes whenever a does.

The reverse direction: b = torch.from_numpy(a)
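
A small sketch of the shared memory in both directions (the CPU tensor and the ndarray point at the same buffer):

import numpy as np
import torch

a = torch.ones(5)
b = a.numpy()
a.add_(1)                # in-place change on the tensor
print(b)                 # [2. 2. 2. 2. 2.] -- the ndarray changed too

a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)      # in-place change on the ndarray
print(b)                 # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)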

c. Tensors and CUDA

if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!

d. Autograd: automatic differentiation and gradient computation behind the scenes

Set a tensor's .requires_grad attribute to True, e.g.:

x = torch.ones(2, 2, requires_grad=True)

The detach method cuts a tensor out of the autograd graph, e.g.:

# y = A(x), z = B(y): compute gradients for B's parameters, but not for A's

# Method 1
y = A(x)
z = B(y.detach())
z.backward()

# Method 2
y = A(x)
y.detach_()
z = B(y)
z.backward()
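
A concrete sketch of the first method, with A and B stood in for by small nn.Linear layers (these particular layers are just an illustration, not part of the original pattern):

import torch
import torch.nn as nn

A = nn.Linear(4, 4)
B = nn.Linear(4, 2)

x = torch.randn(1, 4)
y = A(x)
z = B(y.detach())            # the graph is cut between A and B
z.sum().backward()           # backward needs a scalar, hence .sum()

print(A.weight.grad)         # None: no gradient reached A's parameters
print(B.weight.grad.size())  # torch.Size([2, 4]): B's parameters did get gradients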

Each tensor has a .grad_fn attribute that references the Function that created it, e.g.:

y = x + 2
print(y)
# tensor([[3., 3.],
#         [3., 3.]], grad_fn=<AddBackward0>)

Call out.backward() to compute the gradients; print(x.grad) then shows the corresponding gradient matrix.
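
Putting the pieces together, a minimal end-to-end example (this follows the standard tutorial computation):

x = torch.ones(2, 2, requires_grad=True)
y = x + 2
z = y * y * 3
out = z.mean()
out.backward()      # computes d(out)/dx
print(x.grad)       # tensor([[4.5000, 4.5000],
                    #         [4.5000, 4.5000]])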

2. Building a neural network with the torch.nn package

Take the classic MNIST-style handwritten-digit network as an example:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)   # prints a summary of the network's layers

params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight; this is how to inspect the network's concrete parameters

Simulating a forward and backward pass with random input:

input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
net.zero_grad()
out.backward(torch.randn(1, 10))

A real training step:

output = net(input)
target = torch.randn(10)  # a dummy target, for example
target = target.view(1, -1)  # make it the same shape as output
criterion = nn.MSELoss()

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)   # define the loss
loss.backward()                    # backpropagate the loss
optimizer.step()                   # does the parameter update
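
In practice this single step sits inside a loop over the data. A sketch with fake random batches (the data here is made up purely to show the shape of the loop; it reuses the net, criterion and optimizer defined above):

fake_images = torch.randn(8, 1, 32, 32)    # pretend dataset of 8 images
fake_targets = torch.randn(8, 10)          # pretend regression targets

for epoch in range(2):
    for i in range(fake_images.size(0)):
        inputs = fake_images[i:i + 1]      # batch of one image
        targets = fake_targets[i:i + 1]
        optimizer.zero_grad()              # clear old gradients
        outputs = net(inputs)              # forward pass
        loss = criterion(outputs, targets) # compute the loss
        loss.backward()                    # backpropagate
        optimizer.step()                   # update the parameters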

A simple way to move everything to the GPU:

device = torch.device("cuda:0")
model.to(device)                 # move the model to the GPU
mytensor = my_tensor.to(device)  # move the inputs (and any other tensors used in the computation) to the GPU as well
model = nn.DataParallel(model)   # multi-GPU support
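
A guarded sketch combining these lines (it reuses the net and input from earlier and falls back to the CPU when no GPU is available):

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)                       # move all parameters and buffers
if torch.cuda.device_count() > 1:
    net = nn.DataParallel(net)       # optional multi-GPU wrapper
out = net(input.to(device))          # inputs must live on the same device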

