A First Look at PyTorch

1. Environment Setup

First, install Anaconda; I'm using Python 3.7 for the environment. Create a pytorch environment and then activate it:

conda create -n pytorch python=3.7
conda activate pytorch

Then check which packages are installed:

conda list

Next, install PyTorch.

Go to the official site, https://pytorch.org/, which has a configuration panel:

You need to check your GPU model here, which you can find in Task Manager:

Run this command to check the GPU driver and CUDA version:

nvidia-smi

The site then generates a command like this:

conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia

If you want to switch back to the original channels:

conda config --remove-key channels

I have to say, installing CUDA is quite a hassle; it kept erroring out, so I settled for the CPU version for now and will switch to the CUDA version once I'm more comfortable:

conda install pytorch torchvision torchaudio cpuonly -c pytorch

Check the installed packages (to see a single package's version, pip show <package_name> works):

pip list
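For a quick programmatic check, the same version information pip show provides is also available from Python's standard library (3.8+); a minimal sketch, using pip itself as the example package since it is always installed:

```python
from importlib.metadata import version

# Look up the installed version of a distribution by name.
print(version("pip"))  # e.g. "23.0.1" — whatever is installed
```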

If downloads are slow, another option is to drop the package files directly into the pkgs directory; mine is installed on the D drive:

D:\Program Files\Anaconda3\pkgs

The GPU packages are still downloading; when no GPU is available, running this test returns False:

(pytorch) C:***>python
Python 3.9.16 (main, Jan 11 2023, 16:16:36) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False

For PyCharm, to keep the environment consistent with Anaconda, one approach is to point the project interpreter at the python executable under envs/pytorch.

A note on installing the GPU build of PyTorch: delete the version pin and let conda resolve whatever it wants!

conda install pytorch torchvision torchaudio -c pytorch -c nvidia

Still no luck; it didn't work. I'll keep experimenting later.

Next, install Jupyter Notebook inside the pytorch environment:

(pytorch) C:\Users\zz>conda install nb_conda

Once installed, you can launch Jupyter Notebook:

jupyter notebook

A quick addition for the OpenAI environment, since Andrew Ng has a course out now; first grab the openai package:

conda install openai

After some trial and error, downloading the PyTorch package files first and installing from them turned out to be more convenient. The core packages are torch and torchvision; once those two are installed via pip, the functionality is all there.

2. Getting Started with Neural Networks

The MNIST dataset is a good choice for practice; first, download the data:

from pathlib import Path
import requests

DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"

PATH.mkdir(parents=True, exist_ok=True)

URL = "http://deeplearning.net/data/mnist/"
FILENAME = "mnist.pkl.gz"

if not (PATH / FILENAME).exists():
    content = requests.get(URL + FILENAME).content
    (PATH / FILENAME).open("wb").write(content)

Read in the data:

import pickle
import gzip

with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
    ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")

Take a look at a sample:

from matplotlib import pyplot
import numpy as np

pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray")
print(x_train.shape)
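Each row of x_train is a flattened 28×28 image with 784 values, which is why the reshape above works. A small NumPy illustration of the same flatten/reshape round trip (with a stand-in array, not the real data):

```python
import numpy as np

img = np.arange(28 * 28).reshape(28, 28)  # a stand-in "image"
flat = img.reshape(-1)                    # flattened row, as stored in x_train

print(flat.shape)                  # (784,)
print(flat.reshape(28, 28).shape)  # (28, 28)
assert (flat.reshape(28, 28) == img).all()  # round trip loses nothing
```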

Note that the data must be converted to tensors before it can be used in training:

import torch

x_train, y_train, x_valid, y_valid = map(
    torch.tensor, (x_train, y_train, x_valid, y_valid)
)
n, c = x_train.shape
print(x_train, y_train)
print(x_train.shape)
print(y_train.min(), y_train.max())

Applying layers and functions from torch.nn.functional:

import torch.nn.functional as F
from torch import nn

class Mnist_NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(784, 128)
        self.hidden2 = nn.Linear(128, 256)
        self.out = nn.Linear(256, 10)

    def forward(self, x):
        x = F.relu(self.hidden1(x))
        x = F.relu(self.hidden2(x))
        x = self.out(x)
        return x

Here you can inspect the model's parameters:

net = Mnist_NN()
print(net)
print('-'*8)
for name, parameter in net.named_parameters():
    print(name, parameter,parameter.size())
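As a sanity check on what named_parameters() prints, the parameter counts can be worked out by hand: each nn.Linear(in_f, out_f) holds an out_f × in_f weight matrix plus out_f biases. In plain Python:

```python
# (in_features, out_features) for hidden1, hidden2 and out in Mnist_NN
layers = [(784, 128), (128, 256), (256, 10)]

# weights (out_f * in_f) plus one bias per output unit
total = sum(out_f * in_f + out_f for in_f, out_f in layers)
print(total)  # 136074
```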

Use TensorDataset and DataLoader to simplify batching:

from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader

bs = 64  # batch size

train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)

valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)

def get_data(train_ds, valid_ds, bs):
    return (
        DataLoader(train_ds, batch_size=bs, shuffle=True),
        DataLoader(valid_ds, batch_size=bs * 2),
    )
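Conceptually, a shuffling DataLoader just permutes the sample indices each epoch and serves fixed-size slices. A pure-Python sketch of that idea (illustrative only, not the real DataLoader internals):

```python
import random

def iterate_minibatches(data, batch_size, shuffle=True, seed=None):
    """Yield successive batches of `data`, optionally in shuffled order."""
    indices = list(range(len(data)))
    if shuffle:
        random.Random(seed).shuffle(indices)  # new permutation each call
    for start in range(0, len(indices), batch_size):
        yield [data[i] for i in indices[start:start + batch_size]]

samples = list(range(10))
batches = list(iterate_minibatches(samples, batch_size=4, shuffle=False))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Note how the last batch is simply smaller when the dataset size isn't divisible by the batch size; the real DataLoader behaves the same way unless drop_last=True is passed.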
import numpy as np

def fit(steps, model, loss_func, opt, train_dl, valid_dl):
    for step in range(steps):
        model.train()
        for xb, yb in train_dl:
            loss_batch(model, loss_func, xb, yb, opt)

        model.eval()
        with torch.no_grad():
            losses, nums = zip(
                *[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
            )
        val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
        print('step: ' + str(step), 'validation loss: ' + str(val_loss))
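The validation loss computed above is a batch-size-weighted average, needed because the last batch can be smaller than the others. Checking the formula with made-up numbers:

```python
import numpy as np

# Hypothetical per-batch mean losses and batch sizes (last batch smaller).
losses = [0.50, 0.25, 0.40]
nums = [64, 64, 32]

# Weight each batch's mean loss by its size, then divide by the total count.
val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
print(val_loss)  # (32 + 16 + 12.8) / 160 = 0.38
```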

Now set the learning rate and optimizer. Back when I followed Andrew Ng's deep learning course, hand-rolling these steps in NumPy was tedious; PyTorch is much more convenient:

from torch import optim
def get_model():
    model = Mnist_NN()
    return model, optim.SGD(model.parameters(), lr=0.001)
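Per step, optim.SGD simply applies w -= lr * grad to every parameter. A tiny hand-rolled version with made-up numbers makes that concrete:

```python
lr = 0.5            # exaggerated learning rate so the arithmetic stays exact
w, grad = 2.0, 1.0  # one made-up parameter and its gradient

w = w - lr * grad   # the vanilla update optim.SGD applies to each parameter
print(w)  # 1.5
```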

Define the loss computation for a single batch (passing an optimizer also triggers the backward pass and parameter update):

def loss_batch(model, loss_func, xb, yb, opt=None):
    loss = loss_func(model(xb), yb)

    if opt is not None:
        loss.backward()
        opt.step()
        opt.zero_grad()

    return loss.item(), len(xb)
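The loss_func used with loss_batch for this 10-class problem is typically F.cross_entropy, which fuses a softmax with negative log-likelihood. The arithmetic for a single sample can be verified by hand (toy logits, not model output):

```python
import math

logits = [2.0, 1.0, 0.1]  # made-up scores for a 3-class toy example
target = 0                # index of the true class

# softmax: exponentiate and normalize into probabilities
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]

# cross-entropy: negative log-probability of the true class
loss = -math.log(probs[target])
print(round(loss, 4))  # 0.417
```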

Three lines of training code (plus picking cross-entropy, the standard loss for this multi-class task, as loss_func):

loss_func = F.cross_entropy

train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(25, model, loss_func, opt, train_dl, valid_dl)

To predict, just call model(input). Very convenient!
