Installing CUDA 10.1, cuDNN 7.6, and PyTorch on Ubuntu 18.04

On my own laptop, running 64-bit Ubuntu 18.04 (not in a virtual machine).

  • First install the NVIDIA graphics driver, then run the command below to check that it is working:
nvidia-smi
  • From the CUDA official site, download the release that matches your system. I chose the latest version at the time, CUDA 10.1.

The graphics driver is already installed from the step above, so during the CUDA installation be sure to deselect the bundled driver.

After the installation finishes, add the environment variables to the end of the .bashrc file:

vim ~/.bashrc
export PATH=/usr/local/cuda-10.1/bin${PATH:+:$PATH}   # note: substitute your own CUDA version
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
source ~/.bashrc

 

Run the following commands to test whether the installation succeeded (nvcc -V should now also report the 10.1 toolkit):

cd /usr/local/cuda/samples/1_Utilities/deviceQuery 
sudo make
./deviceQuery

Result = PASS at the end of the output means CUDA was installed successfully.

  • Download cuDNN from the NVIDIA website, choosing the package that matches your system and the CUDA version you just installed.

Download the following three packages and install them in this order with dpkg -i:

sudo dpkg -i libcudnn7_7.6.4.38-1+cuda10.1_amd64.deb
sudo dpkg -i libcudnn7-devel_7.6.4.38-1+cuda10.1_amd64.deb
sudo dpkg -i libcudnn7-doc_7.6.4.38-1+cuda10.1_amd64.deb

Check that cuDNN installed correctly:

cp -r /usr/src/cudnn_samples_v7/ $HOME
cd $HOME/cudnn_samples_v7/mnistCUDNN
make clean && make
./mnistCUDNN

When you see Test passed! at the end of the output, everything is installed and working.

  • Install the GPU build of PyTorch

See the official site https://pytorch.org/get-started/locally/ for the install command that matches your CUDA version; a quick post-install sanity check is sketched below.
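Once installed, a minimal sanity check (my sketch, using only standard PyTorch calls) confirms that PyTorch sees the CUDA/cuDNN stack set up above:

import torch

print(torch.__version__)               # installed PyTorch version
print(torch.version.cuda)              # CUDA version PyTorch was built against, e.g. 10.1
print(torch.backends.cudnn.version())  # cuDNN version, e.g. 7604 for 7.6.4
print(torch.cuda.is_available())       # True means the driver and CUDA toolkit are usable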

A PyTorch forward example:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class Net(nn.Module):  # your network must subclass nn.Module
    def __init__(self):
        super(Net, self).__init__()
        # two convolutional layers, self.conv1 and self.conv2; note that these layers do not include activation functions
        self.conv1 = nn.Conv2d(1, 6, 5)  # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv2 = nn.Conv2d(6, 16, 5)
        # three fully connected layers
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # an affine operation: y = Wx + b
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):  # note: a 2D conv layer expects input of shape batchsize x channels x height x width
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))  # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # If the size is a square you can only specify a single number
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


net = Net()
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
num_iterations = 20
input = torch.randn(2, 1, 32, 32)  # a batch of 2 single-channel 32x32 images
target = torch.tensor([5, 7])      # one target class index per sample in the batch
# in your training loop:
for i in range(num_iterations):
    optimizer.zero_grad()  # zero the gradient buffers; without this, gradients accumulate across iterations

    output = net(input)  # the graph is built dynamically here; you could even pass other arguments to change the network structure
    loss = criterion(output, target)
    print('iteration', i, 'loss =', loss.item())
    loss.backward()   # compute the gradients, i.e. populate .grad on each parameter
    optimizer.step()  # apply the update, i.e. param -= lr * param.grad
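The loop above runs on the CPU. Since the point of this install is the GPU build, here is a minimal sketch (my addition, reusing the Net class above) of moving the model and data onto the GPU when one is available:

# pick the GPU if the CUDA stack is working, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = Net().to(device)                            # move all parameters to the device
input = torch.randn(2, 1, 32, 32, device=device)  # allocate the batch directly on the device
target = torch.tensor([5, 7], device=device)

output = net(input)   # the forward pass now runs on the GPU
loss = nn.CrossEntropyLoss()(output, target)
loss.backward()       # gradients are computed on the GPU as well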

 

 

