Using PyTorch on TPUs in Colab

·Install PyTorch/XLA on Colab
·Run basic PyTorch functions on the TPU, such as creating and adding tensors
·Run PyTorch modules and autograd on the TPU
·Run a PyTorch network on the TPU

Using tensors, running modules, and running whole networks on a Colab TPU is straightforward.

Prerequisite:
In the main menu, click Runtime and select Change runtime type, then set the hardware accelerator to "TPU". With the TPU selected, the cell below verifies that you can access a TPU on Colab.

import os
assert os.environ.get('COLAB_TPU_ADDR'), 'No TPU runtime attached'  # check that a TPU is available

1. Install PyTorch/XLA

!pip install cloud-tpu-client==0.10 torch==1.11.0 https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl
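After the install finishes, a quick sanity check confirms the wheel loads and can see the TPU cores. A minimal sketch (get_xla_supported_devices is part of the torch_xla.core.xla_model module; the exact device list depends on your TPU):

import torch_xla
import torch_xla.core.xla_model as xm

print(torch_xla.__version__)           # expect 1.11 for this wheel
print(xm.get_xla_supported_devices())  # e.g. ['xla:0', 'xla:1', ..., 'xla:7'] on a v2-8 TPU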

2. If you are using a GPU runtime on Colab instead, run the command below to install the CUDA-compatible PyTorch/XLA wheel and its dependencies:

!pip install cloud-tpu-client==0.10 torch==1.11.0 https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl --force-reinstall
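Note that with the CUDA wheel, XLA finds the GPU through an environment variable rather than through the TPU address. A minimal sketch, assuming a single-GPU runtime (GPU_NUM_DEVICES is how the 1.11-era XLA:GPU wheels were pointed at the hardware):

import os
os.environ['GPU_NUM_DEVICES'] = '1'  # assumption: one GPU attached to this runtime

import torch_xla.core.xla_model as xm
dev = xm.xla_device()  # now resolves to the XLA:GPU device instead of a TPU core
print(dev)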

3. Create and manipulate tensors on the TPU

import torch
import torch_xla
import torch_xla.core.xla_model as xm

dev = xm.xla_device()  # the default TPU core for this process
t1 = torch.ones(3, 3, device=dev)
print(t1)
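XLA tensors move between devices with the usual .to() / .cpu() calls; this is standard PyTorch API, nothing TPU-specific. For example:

t_cpu = torch.ones(3, 3)  # created on the host CPU
t_tpu = t_cpu.to(dev)     # copied to the TPU core
t_back = t_tpu.cpu()      # copied back to the host
print(t_back.device)      # cpu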

For a description of all PyTorch/XLA functions, see the documentation at http://pytorch.org/xla/.

a = torch.randn(2, 2, device=dev)
b = torch.randn(2, 2, device=dev)
print(a + b)
print(b * 2)
print(torch.matmul(a, b))
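One thing to keep in mind: XLA tensors are evaluated lazily. Operations are recorded into a graph and only compiled and executed when a result is actually needed, for example when print pulls a value back to the host. xm.mark_step() forces any pending work to run; a short sketch:

c = torch.matmul(a, b)  # recorded into the XLA graph, not yet executed
xm.mark_step()          # compile and run the pending graph on the TPU
print(c)                # fetching the value would also have triggered execution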

Running a 1D convolution on the TPU core:

filters = torch.randn(33, 16, 3, device=dev)  # (out_channels, in_channels, kernel_size)
inputs = torch.randn(20, 16, 50, device=dev)  # (batch, in_channels, length)
torch.nn.functional.conv1d(inputs, filters)   # output shape: (20, 33, 48)

4. Run modules and autograd on the TPU

fc = torch.nn.Linear(5, 2, bias=True)
fc = fc.to(dev)  # move the module's parameters to the TPU
features = torch.randn(3, 5, device=dev, requires_grad=True)
output = fc(features)
print(output)
output.backward(torch.ones_like(output))  # backprop a gradient of ones
print(fc.weight.grad)
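To take an actual optimization step on the TPU, torch_xla provides xm.optimizer_step, which applies the update and flushes the XLA graph. A minimal sketch continuing the example above (the SGD optimizer, squared-sum loss, and learning rate are illustrative choices, not from the original):

optimizer = torch.optim.SGD(fc.parameters(), lr=0.01)

optimizer.zero_grad()
output = fc(features)
loss = output.pow(2).sum()                  # illustrative scalar loss
loss.backward()
xm.optimizer_step(optimizer, barrier=True)  # apply the update and flush the graph
print(fc.weight)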

5. Run a neural network on the TPU

import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 6 * 6, 120)  # 6*6 from image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


net = Net().to(dev)
input = torch.randn(1, 1, 32, 32, device=dev)
out = net(input)
print(out)
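A full training step for this network looks the same as on CPU or GPU, with xm.optimizer_step closing the step. A minimal sketch with a random target (the MSE loss, optimizer, and hyperparameters are illustrative, in the spirit of the classic PyTorch blitz tutorial):

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
target = torch.randn(1, 10, device=dev)  # dummy target with the same shape as the output

optimizer.zero_grad()
out = net(input)
loss = criterion(out, target)
loss.backward()
xm.optimizer_step(optimizer, barrier=True)
print(loss)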