PyTorch Learning Notes

PyTorch basics course: Neural Network Programming - Deep Learning with PyTorch

1. Training on a single batch

1. Prepare the data:           train_set
2. Build the network:          network = Network()
3. Load the data in batches:   train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)
4. Get one batch:              batch = next(iter(train_loader))
5. Unpack images and labels:   images, labels = batch
6. Predict:                    preds = network(images)
7. Compute the loss:           loss = F.cross_entropy(preds, labels)
8. Backpropagate the loss:     loss.backward()
   AMP mixes FP32 and FP16 arithmetic: FP16 cuts memory use, FP32 keeps numerical error down.
   Replace loss.backward() with the following (a native torch.cuda.amp alternative is sketched after this section's output):
   from apex import amp
   model, optimizer = amp.initialize(model, optimizer, opt_level="O1")  # note: the letter O, not the digit 0
   with amp.scale_loss(loss, optimizer) as scaled_loss:
       scaled_loss.backward()
   O0: pure FP32 training; can serve as the accuracy baseline.
   O1: mixed-precision training (recommended); white/black lists decide per op whether to use FP16 (GEMM, convolutions) or FP32 (softmax).
   O2: almost-FP16 mixed precision; no white/black lists, nearly everything except batch norm runs in FP16.
   O3: pure FP16 training; very unstable, but useful as the speed baseline.
9. Create the optimizer:       optimizer = optim.Adam(network.parameters(), lr=0.01)
10. Update the weights:        optimizer.step()
11. Print the loss:            print('loss1:', loss.item())
12. Predict again after the update:  preds = network(images)
13. Compute the loss again:          loss = F.cross_entropy(preds, labels)
14. Print the new loss:              print('loss2:', loss.item())

Output:
loss1: 2.3034827709198
loss2: 2.2825052738189697

Here the weights were updated only once.
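As referenced in step 8: apex is in maintenance mode, and since PyTorch 1.6 the same mixed-precision idea ships natively in torch.cuda.amp. A minimal sketch of the training step under that API, reusing network, optimizer, and train_loader from the steps above (assumes the model and data live on the GPU):

import torch
import torch.nn.functional as F
from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()  # scales the loss so FP16 gradients do not underflow

for images, labels in train_loader:
    images, labels = images.cuda(), labels.cuda()  # network must be on the GPU too
    optimizer.zero_grad()
    with autocast():  # ops run in FP16 or FP32 according to PyTorch's internal op lists
        preds = network(images)
        loss = F.cross_entropy(preds, labels)
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then calls optimizer.step()
    scaler.update()                # adjusts the scale factor for the next iteration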

train_set    | train_set = torchvision.datasets.FashionMNIST(root='./data', train=True, download=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean, std)]))
train_loader | train_loader = torch.utils.data.DataLoader(train_set, batch_size=1000, shuffle=True)
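mean and std are not defined in these notes; for Fashion-MNIST they can be computed from the training set. The commonly quoted approximations (treat them as assumptions) are:

mean, std = 0.2860, 0.3530  # approximate Fashion-MNIST channel statistics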
class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)

        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)
        self.fc2 = nn.Linear(in_features=120, out_features=60)
        self.out = nn.Linear(in_features=60, out_features=10)

    def forward(self, t):
        # (1) input layer
        t = t

        # (2) hidden conv layer
        t = self.conv1(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)

        # (3) hidden conv layer
        t = self.conv2(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)

        # (4) hidden linear layer
        t = t.reshape(-1, 12 * 4 * 4)
        t = self.fc1(t)
        t = F.relu(t)

        # (5) hidden linear layer
        t = self.fc2(t)
        t = F.relu(t)

        # (6) output layer
        t = self.out(t)
        #t = F.softmax(t, dim=1)
        return t

2. Training on all batches (one epoch)

After one full training epoch we print the total loss and the number of correct predictions. If len(train_set) = 60000 and batch_size = 100, the weights are updated 60000 / 100 = 600 times, i.e., 600 iterations.

Iterating with for batch in train_loader is analogous to calling batch = next(iter(train_loader)) repeatedly.
network = Network()

train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)
optimizer = optim.Adam(network.parameters(), lr=0.01)

total_loss = 0
total_correct = 0

for batch in train_loader: # Get Batch
    images, labels = batch 

    preds = network(images) # Pass Batch
    loss = F.cross_entropy(preds, labels) # Calculate Loss

    optimizer.zero_grad() # zero the gradients so each iteration computes them fresh
    loss.backward() # Calculate Gradients
    optimizer.step() # Update Weights

    total_loss += loss.item()
    total_correct += get_num_correct(preds, labels) # count correct predictions

print(
    "epoch:", 0, 
    "total_correct:", total_correct, 
    "loss:", total_loss
)

Output:
epoch: 0 total_correct: 42104 loss: 476.6809593439102

loss = F.cross_entropy(preds, labels) returns the mean loss over the batch (reduction='mean' by default). To accumulate a true total loss, use total_loss += loss.item() * batch_size; when the batch size does not divide the dataset evenly, total_loss += loss.item() * images.shape[0] is more accurate.
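The get_num_correct helper used above is not defined until the RunManager section; a standalone version, matching the implementation there:

def get_num_correct(preds, labels):
    # a prediction counts as correct when the argmax over class scores matches the label
    return preds.argmax(dim=1).eq(labels).sum().item()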

3. Training over multiple epochs

network = Network()

train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)
optimizer = optim.Adam(network.parameters(), lr=0.01)

for epoch in range(10):

    total_loss = 0
    total_correct = 0

    for batch in train_loader: # Get Batch
        images, labels = batch 

        preds = network(images) # Pass Batch
        loss = F.cross_entropy(preds, labels) # Calculate Loss

        optimizer.zero_grad()
        loss.backward() # Calculate Gradients
        optimizer.step() # Update Weights

        total_loss += loss.item()
        total_correct += get_num_correct(preds, labels)

    print(
        "epoch", epoch, 
        "total_correct:", total_correct, 
        "loss:", total_loss
    )


Output:
epoch 0 total_correct: 43301 loss: 447.59147948026657
epoch 1 total_correct: 49565 loss: 284.43429669737816
epoch 2 total_correct: 51063 loss: 244.08825492858887
epoch 3 total_correct: 51955 loss: 220.5841210782528
epoch 4 total_correct: 52551 loss: 204.73878084123135
epoch 5 total_correct: 52914 loss: 193.1240530461073
epoch 6 total_correct: 53195 loss: 184.50964668393135
epoch 7 total_correct: 53445 loss: 177.78808392584324
epoch 8 total_correct: 53629 loss: 171.81662507355213
epoch 9 total_correct: 53819 loss: 166.2412590533495

4. Tricks for evaluation

Collect the predictions from every batch and cat them together.

@torch.no_grad()
def get_all_preds(model, loader):
    all_preds = torch.tensor([])
    for batch in loader:
        images, labels = batch

        preds = model(images)
        all_preds = torch.cat(
            (all_preds, preds)
            ,dim=0
        )
    return all_preds

@torch.no_grad() turns off gradient tracking; since no gradients are needed here, this reduces memory and compute.
Gradient tracking can also be disabled for parts of a computation during training, as shown below.
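A minimal sketch of both options, reusing the names from the training loop above:

# option 1: compute metrics inside the training loop without building a graph
with torch.no_grad():
    total_correct += preds.argmax(dim=1).eq(labels).sum().item()

# option 2: freeze a layer so its parameters stop receiving gradients
for param in network.conv1.parameters():
    param.requires_grad_(False)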

Predicting without computing gradients:
with torch.no_grad():
    prediction_loader = torch.utils.data.DataLoader(train_set, batch_size=10000)
    train_preds = get_all_preds(network, prediction_loader)

5. At test time, a 2-D matrix shows how often each class is predicted correctly or confused with another

Stack the ground truth against the predictions:
stacked = torch.stack(
    (
        train_set.targets # ground truth
        ,train_preds.argmax(dim=1) # predictions
    )
    ,dim=1
)

stacked
tensor([
    [9, 9],
    [0, 0],
    [0, 0],
    ...,
    [3, 3],
    [0, 0],
    [5, 5]
])

cmt = torch.zeros(10, 10, dtype=torch.int64) # 10 classes: rows are true labels, columns are predicted labels

for p in stacked:
    tl, pl = p.tolist() # true label tl was predicted as class pl
    cmt[tl, pl] = cmt[tl, pl] + 1 # tally
    
cmt
tensor([
    [5637,    3,   96,   75,   20,   10,   86,    0,   73,    0],
    [  40, 5843,    3,   75,   16,    8,    5,    0,   10,    0],
    [  87,    4, 4500,   70, 1069,    8,  156,    0,  106,    0],
    [ 339,   61,   19, 5269,  203,   10,   72,    2,   25,    0],
    [  23,    9,  263,  209, 5217,    2,  238,    0,   39,    0],
    [   0,    0,    0,    1,    0, 5604,    0,  333,   13,   49],
    [1827,    7,  716,  104,  792,    3, 2370,    0,  181,    0],
    [   0,    0,    0,    0,    0,   22,    0, 5867,    4,  107],
    [  32,    1,   13,   15,   19,    5,   17,   11, 5887,    0],
    [   0,    0,    0,    0,    0,   28,    0,  234,    6, 5732]
])

To draw the plot below, we need:

import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from resources.plotcm import plot_confusion_matrix # defined below
cm = confusion_matrix(train_set.targets, train_preds.argmax(dim=1)) # build the confusion matrix
print(type(cm))
cm

<class 'numpy.ndarray'>
Out[74]:
array([[5431,   14,   88,  145,   26,    7,  241,    0,   48,    0],
        [   4, 5896,    6,   75,    8,    0,    8,    0,    3,    0],
        [  92,    6, 5002,   76,  565,    1,  232,    1,   25,    0],
        [ 191,   49,   23, 5504,  162,    1,   61,    0,    7,    2],
        [  15,   12,  267,  213, 5305,    1,  168,    0,   19,    0],
        [   0,    0,    0,    0,    0, 5847,    0,  112,    3,   38],
        [1159,   16,  523,  189,  676,    0, 3396,    0,   41,    0],
        [   0,    0,    0,    0,    0,   99,    0, 5540,    0,  361],
        [  28,    6,   29,   15,   32,   23,   26,   14, 5827,    0],
        [   0,    0,    0,    0,    1,   61,    0,  107,    1, 5830]],
        dtype=int64)

Used to draw the figure below:

plotcm.py:

import itertools
import numpy as np
import matplotlib.pyplot as plt

def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
from plotcm import plot_confusion_matrix
plt.figure(figsize=(10,10))
plot_confusion_matrix(cm, train_set.classes)

[Confusion matrix plot]

6. TensorBoard

What TensorBoard is for:

  1. Tracking and visualizing metrics such as loss and accuracy
  2. Visualizing the model graph (ops and layers)
  3. Viewing histograms of weights, biases, or other tensors as they change over time
  4. Projecting embeddings to a lower dimensional space
  5. Displaying images, text, and audio data
  6. Profiling TensorFlow programs

Import, check the version, install:

# in Python
from torch.utils.tensorboard import SummaryWriter

# in a shell
tensorboard --version
pip install tensorboard
# or
conda install tensorboard

Usage:

tb = SummaryWriter() # create a writer (logs to ./runs by default)

network = Network()
images, labels = next(iter(train_loader))
grid = torchvision.utils.make_grid(images)

tb.add_image('images', grid) # log the grid of images
tb.add_graph(network, images) # log the model graph, traced with a batch of images
tb.close()

Launch TensorBoard from a shell, then browse to the UI at http://localhost:6006:
tensorboard --logdir=runs

tb.add_scalar('Loss', total_loss, epoch)
tb.add_scalar('Number Correct', total_correct, epoch)
tb.add_scalar('Accuracy', total_correct / len(train_set), epoch)

tb.add_histogram('conv1.bias', network.conv1.bias, epoch)
tb.add_histogram('conv1.weight', network.conv1.weight, epoch)
tb.add_histogram('conv1.weight.grad', network.conv1.weight.grad, epoch)

An example (the loop body is the same training step as in section 2):
    total_loss = 0
    total_correct = 0

    for batch in train_loader: # Get Batch
        images, labels = batch

        preds = network(images) # Pass Batch
        loss = F.cross_entropy(preds, labels) # Calculate Loss
        optimizer.zero_grad()
        loss.backward() # Calculate Gradients
        optimizer.step() # Update Weights

        total_loss += loss.item()
        total_correct += get_num_correct(preds, labels)

    tb.add_scalar('Loss', total_loss, epoch)
    tb.add_scalar('Number Correct', total_correct, epoch)
    tb.add_scalar('Accuracy', total_correct / len(train_set), epoch)

    tb.add_histogram('conv1.bias', network.conv1.bias, epoch)
    tb.add_histogram('conv1.weight', network.conv1.weight, epoch)
    tb.add_histogram(
        'conv1.weight.grad'
        ,network.conv1.weight.grad
        ,epoch
    )
or
# named_parameters() on nn.Module yields each parameter's name and value
for name, weight in network.named_parameters():
    tb.add_histogram(name, weight, epoch)
    tb.add_histogram(f'{name}.grad', weight.grad, epoch)

print(
    "epoch", epoch,
    "total_correct:", total_correct,
    "loss:", total_loss
)

tb.close()

7. Varying batch size and learning rate

batch_size_list = [100, 1000, 10000]
lr_list = [.01, .001, .0001, .00001]
for batch_size in batch_size_list:
    for lr in lr_list:
        network = Network()

        train_loader = torch.utils.data.DataLoader(
            train_set, batch_size=batch_size
        )
        optimizer = optim.Adam(
            network.parameters(), lr=lr
        )

        images, labels = next(iter(train_loader))
        grid = torchvision.utils.make_grid(images)

        comment=f' batch_size={batch_size} lr={lr}'
        tb = SummaryWriter(comment=comment)
        tb.add_image('images', grid)
        tb.add_graph(network, images)

        for epoch in range(5):
            total_loss = 0
            total_correct = 0
            for batch in train_loader:
                images, labels = batch # Get Batch
                preds = network(images) # Pass Batch
                loss = F.cross_entropy(preds, labels) # Calculate Loss
                optimizer.zero_grad() # Zero Gradients
                loss.backward() # Calculate Gradients
                optimizer.step() # Update Weights

                total_loss += loss.item() * batch_size
                total_correct += get_num_correct(preds, labels)

            tb.add_scalar(
                'Loss', total_loss, epoch
            )
            tb.add_scalar(
                'Number Correct', total_correct, epoch
            )
            tb.add_scalar(
                'Accuracy', total_correct / len(train_set), epoch
            )

            for name, param in network.named_parameters():
                tb.add_histogram(name, param, epoch)
                tb.add_histogram(f'{name}.grad', param.grad, epoch)

            print(
                "epoch", epoch
                ,"total_correct:", total_correct
                ,"loss:", total_loss
            )  
        tb.close()

If the batch size does not divide the dataset evenly, the last batch contains fewer samples. Setting drop_last=True skips that final incomplete batch; the default is False.
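For example:

train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=100, drop_last=True  # skip the final batch if it has fewer than 100 samples
)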

8. Flattening nested for loops

for ...:
    for ...:
        for ...:
            # code

Nesting like this gets unwieldy; try the following instead:
from itertools import product
Init signature: product(*args, **kwargs)
Docstring:     
"""
product(*iterables, repeat=1) --> product object
Cartesian product of input iterables.  Equivalent to nested for-loops.
"""

parameters = dict(
    lr = [.01, .001]
    ,batch_size = [100, 1000]
    ,shuffle = [True, False]
)
param_values = [v for v in parameters.values()]

for lr, batch_size, shuffle in product(*param_values): # '*' unpacks the list into separate arguments
    print(lr, batch_size, shuffle)
    comment = f' batch_size={batch_size} lr={lr} shuffle={shuffle}'

    train_loader = torch.utils.data.DataLoader(
        train_set
        ,batch_size=batch_size
        ,shuffle=shuffle 
    )

    optimizer = optim.Adam(
        network.parameters(), lr=lr
    )

    # Rest of training process given the set of parameters

0.01 100 True
0.01 100 False
0.01 1000 True
0.01 1000 False
0.001 100 True
0.001 100 False
0.001 1000 True
0.001 1000 False

9. Sequential models

Sequential modules

1:
network1 = nn.Sequential(
    nn.Flatten(start_dim=1)
    ,nn.Linear(in_features, out_features)
    ,nn.Linear(out_features, out_classes)
)

2:
layers = OrderedDict([
    ('flat', nn.Flatten(start_dim=1))
   ,('hidden', nn.Linear(in_features, out_features))
   ,('output', nn.Linear(out_features, out_classes))
])
network2 = nn.Sequential(layers)

3:
network3 = nn.Sequential()
network3.add_module('flat', nn.Flatten(start_dim=1))
network3.add_module('hidden', nn.Linear(in_features, out_features))
network3.add_module('output', nn.Linear(out_features, out_classes))
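The three variants above leave in_features, out_features, and out_classes undefined; for flattened 28x28 Fashion-MNIST images one plausible choice (an assumption; the hidden width is arbitrary) is:

in_features = 28 * 28  # each flattened grayscale image
out_features = 120     # hidden layer width (arbitrary choice)
out_classes = 10       # Fashion-MNIST has 10 classes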

Example:
class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 12, 5)

        self.fc1 = nn.Linear(in_features=12*4*4, out_features=120)
        self.fc2 = nn.Linear(in_features=120, out_features=60)
        self.out = nn.Linear(in_features=60, out_features=10)

    def forward(self, t):

        t = F.relu(self.conv1(t))
        t = F.max_pool2d(t, kernel_size=2, stride=2)

        t = F.relu(self.conv2(t))
        t = F.max_pool2d(t, kernel_size=2, stride=2)

        t = t.flatten(start_dim=1)
        t = F.relu(self.fc1(t))
        t = F.relu(self.fc2(t))
        t = self.out(t)

        return t
network = Network()
Equivalent to:
sequential = nn.Sequential(
      nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
    , nn.ReLU()
    , nn.MaxPool2d(kernel_size=2, stride=2)
    , nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)
    , nn.ReLU()
    , nn.MaxPool2d(kernel_size=2, stride=2)
    , nn.Flatten(start_dim=1)  
    , nn.Linear(in_features=12*4*4, out_features=120)
    , nn.ReLU()
    , nn.Linear(in_features=120, out_features=60)
    , nn.ReLU()
    , nn.Linear(in_features=60, out_features=10)
)

With a fixed random seed, torch.manual_seed(50), the two networks produce identical predictions.
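A quick way to check this (a sketch reusing Network and train_loader from above): resetting the seed before each construction makes the layers, created in the same order with the same shapes, draw identical initial weights.

torch.manual_seed(50)
network = Network()            # class-based model from above

torch.manual_seed(50)          # same seed -> same random initial weights
sequential = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(6, 12, 5), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Flatten(start_dim=1),
    nn.Linear(12 * 4 * 4, 120), nn.ReLU(),
    nn.Linear(120, 60), nn.ReLU(),
    nn.Linear(60, 10)
)

images, labels = next(iter(train_loader))
print(torch.equal(network(images), sequential(images)))  # True: identical weights, identical outputs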

10. RunBuilder

This class builds a list of runs from the parameter values we pass in.

from collections import OrderedDict
from collections import namedtuple
from itertools import product

class RunBuilder():
    @staticmethod
    def get_runs(params):

        Run = namedtuple('Run', params.keys())

        runs = []
        for v in product(*params.values()):
            runs.append(Run(*v))

        return runs
params = OrderedDict(
    lr = [.01, .001]
    ,batch_size = [1000, 10000]
)
> runs = RunBuilder.get_runs(params)
> runs

[
    Run(lr=0.01, batch_size=1000),
    Run(lr=0.01, batch_size=10000),
    Run(lr=0.001, batch_size=1000),
    Run(lr=0.001, batch_size=10000)
]
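Each run is a namedtuple, so a single loop over the runs replaces the nested loops from section 8; for example:

for run in RunBuilder.get_runs(params):
    comment = f'-{run}'            # e.g. '-Run(lr=0.01, batch_size=1000)', handy for SummaryWriter
    print(run.lr, run.batch_size)  # fields are accessible by name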

11. RunManager

All of this helps, but the training loop is now rather crowded. In this section we clean it up and lay the groundwork for more experimentation, using the RunBuilder class built last time plus a new class called RunManager. We can then add parameters and values at the top and have every combination tried across multiple training runs; in effect, centralized experiment configuration.

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms

from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from IPython.display import display, clear_output
import pandas as pd
import time
import json

from itertools import product
from collections import namedtuple
from collections import OrderedDict

class RunManager():
    def __init__(self):

        self.epoch_count = 0
        self.epoch_loss = 0
        self.epoch_num_correct = 0
        self.epoch_start_time = None

        self.run_params = None # the run definition: its value will be one of the runs returned by RunBuilder
        self.run_count = 0 # number of runs so far
        self.run_data = [] # a list tracking parameter values and per-epoch results for every run
        self.run_start_time = None

        self.network = None
        self.loader = None
        self.tb = None

Several attributes share the epoch prefix (self.epoch_count, self.epoch_loss, ...); dropping the prefix and grouping them into a dedicated Epoch class lets them be managed together and keeps the code simpler, along with a few related helper methods. The methods below, begin_run through save, are RunManager methods, shown unindented as they were introduced one at a time:

class Epoch():
    def __init__(self):
        self.count = 0
        self.loss = 0
        self.num_correct = 0
        self.start_time = None
def begin_run(self, run, network, loader):

    self.run_start_time = time.time()

    self.run_params = run
    self.run_count += 1

    self.network = network
    self.loader = loader
    self.tb = SummaryWriter(comment=f'-{run}')

    images, labels = next(iter(self.loader))
    grid = torchvision.utils.make_grid(images)

    self.tb.add_image('images', grid)
    self.tb.add_graph(self.network, images)
def end_run(self):
    self.tb.close()
    self.epoch_count = 0
def begin_epoch(self):
    self.epoch_start_time = time.time()

    self.epoch_count += 1
    self.epoch_loss = 0
    self.epoch_num_correct = 0
def end_epoch(self):

    epoch_duration = time.time() - self.epoch_start_time
    run_duration = time.time() - self.run_start_time

    loss = self.epoch_loss / len(self.loader.dataset)
    accuracy = self.epoch_num_correct / len(self.loader.dataset)

    self.tb.add_scalar('Loss', loss, self.epoch_count)
    self.tb.add_scalar('Accuracy', accuracy, self.epoch_count)

    for name, param in self.network.named_parameters():
        self.tb.add_histogram(name, param, self.epoch_count)
        self.tb.add_histogram(f'{name}.grad', param.grad, self.epoch_count)
def end_epoch(self):
    ... # continues the body above, then additionally records the results:

    results = OrderedDict()
    results["run"] = self.run_count
    results["epoch"] = self.epoch_count
    results['loss'] = loss
    results["accuracy"] = accuracy
    results['epoch duration'] = epoch_duration
    results['run duration'] = run_duration
    for k,v in self.run_params._asdict().items(): results[k] = v
    self.run_data.append(results)

    df = pd.DataFrame.from_dict(self.run_data, orient='columns')
def track_loss(self, loss, batch):
    self.epoch_loss += loss.item() * batch[0].shape[0]

def track_num_correct(self, preds, labels):
    self.epoch_num_correct += self._get_num_correct(preds, labels)
def _get_num_correct(self, preds, labels):
    return preds.argmax(dim=1).eq(labels).sum().item()
def save(self, fileName):
    pd.DataFrame.from_dict(
        self.run_data, orient='columns'
    ).to_csv(f'{fileName}.csv')

    with open(f'{fileName}.json', 'w', encoding='utf-8') as f:
        json.dump(self.run_data, f, ensure_ascii=False, indent=4)
#-----------------------------------------------
m = RunManager()
params = OrderedDict(
      lr = [.01]
    , batch_size = [1000]
    , num_workers = [1]
    , device = ['cuda']
    , trainset = ['not_normal', 'normal']
)
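trainsets is not defined in these notes; a plausible definition matching the 'not_normal'/'normal' values is a dict holding an unnormalized and a normalized Fashion-MNIST dataset (the mean/std below are the commonly quoted approximations):

mean, std = 0.2860, 0.3530  # approximate Fashion-MNIST statistics
trainsets = {
    'not_normal': torchvision.datasets.FashionMNIST(
        root='./data', train=True, download=True,
        transform=transforms.Compose([transforms.ToTensor()])
    ),
    'normal': torchvision.datasets.FashionMNIST(
        root='./data', train=True, download=True,
        transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((mean,), (std,))])
    ),
}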

for run in RunBuilder.get_runs(params):

    device = torch.device(run.device)
    network = Network().to(device)
    loader = DataLoader(
          trainsets[run.trainset]
        , batch_size=run.batch_size
        , num_workers=run.num_workers
    )
    optimizer = optim.Adam(network.parameters(), lr=run.lr)

    m.begin_run(run, network, loader)
    for epoch in range(20):
        m.begin_epoch()
        for batch in loader:

            images = batch[0].to(device)
            labels = batch[1].to(device)
            preds = network(images) # Pass Batch
            loss = F.cross_entropy(preds, labels) # Calculate Loss
            optimizer.zero_grad() # Zero Gradients
            loss.backward() # Calculate Gradients
            optimizer.step() # Update Weights

            m.track_loss(loss, batch)
            m.track_num_correct(preds, labels)
        m.end_epoch()
    m.end_run()
m.save('results')

Some tensor operations in torch

Function | Explanation
torch.tensor() | The standard tensor constructor. A tensor has three attributes: dtype (element type), device (CPU or GPU), and layout; it is described by its axes, rank, and shape. In CNNs a tensor is [batch_size x channels x H x W].
t.reshape(m, n) | Reshapes a tensor to shape m x n. Passing -1 for m or n infers that dimension from the element count.
torch.cat() | torch.cat((t1, t2), dim=0 or 1): dim=0 concatenates along rows, dim=1 along columns. No new dimension is added.
t.flatten() | Flattens a tensor, building on squeeze: squeeze removes a dimension, unsqueeze adds one. t.flatten(start_dim=1) flattens from a given axis onward and leaves the earlier axes untouched.
t.eq() | Element-wise equality comparison.
torch.stack | Stacks tensors along a new axis. cat does not add one; it concatenates along an existing axis (dim=0 by default).
Arithmetic | add(), sub(), mul(), div() are all element-wise operations.
Comparisons | Comparisons in torch are element-wise too. Broadcasting reshapes operands of mismatched shapes so element-wise arithmetic still works.
t.numel() | Counts the elements in a tensor. torch.sum(a, dim=(0, 1), keepdim=True) chooses which axes to sum over and whether to keep the dimensionality.
t.max() and t.argmax() | With no dim argument, t.max() returns the largest value and t.argmax() returns its position in the flattened tensor.
Type conversion | t.item() turns a scalar tensor into a Python number; t.tolist() and t.numpy() turn tensors into a Python list and a NumPy array.
A note | Operations in torch always run along an axis. nn.Xxx and nn.functional.xxx provide the same functionality: nn.functional.xxx is the functional interface, while nn.Xxx is its class wrapper, and every nn.Xxx inherits from the common ancestor nn.Module. So nn.Xxx carries, in addition to the nn.functional.xxx behavior, nn.Module attributes and methods such as train(), eval(), load_state_dict, and state_dict.
yield | yield returns a value like return but remembers the position; the next iteration resumes right after it.
collections.OrderedDict | OrderedDict builds a dictionary that preserves insertion order.
collections.namedtuple | namedtuple builds a lightweight tuple-like class: Point = namedtuple('Point', ['x', 'y']); p = Point(11, y=22); p[0] + p[1] == 33.
torchvision.utils.make_grid | make_grid tiles a batch of images into one image.
.to('cuda' or 'cpu'), .cpu(), .cuda() | t.to(...) is preferred because it is parameterized and easier to switch. t = t.cuda() and network = network.cuda() move data and model to the GPU; both must be there together. t.device then reports device(type='cuda', index=0); .cpu() works the same way. All operands must live on the same device (CPU or GPU), otherwise the operation raises an error.
transforms.Normalize(mean, std) | Whether to normalize the inputs and with which statistics.
img
tensor([
    [
        [1., 1.]
        ,[1., 1.]
    ]
    ,[
        [2., 2.]
        , [2., 2.]
    ],
    [
        [3., 3.]
        ,[3., 3.]
    ]
])
img.flatten(start_dim=0)
tensor([1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.])
img.flatten(start_dim=1)
tensor([
    [1., 1., 1., 1.],
    [2., 2., 2., 2.],
    [3., 3., 3., 3.]
])

> t1 = torch.tensor([1,1,1])
> t1.unsqueeze(dim=0)
tensor([[1, 1, 1]])
> t1.unsqueeze(dim=1)
tensor([[1],
        [1],
        [1]])
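A quick check of the stack-vs-cat distinction from the table above (a sketch):

t1 = torch.tensor([1, 1, 1])
t2 = torch.tensor([2, 2, 2])

torch.cat((t1, t2), dim=0)    # tensor([1, 1, 1, 2, 2, 2])  -- same rank, existing axis
torch.stack((t1, t2), dim=0)  # tensor([[1, 1, 1],
                              #         [2, 2, 2]])          -- a new axis is added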
Module | Explanation
torch | The top-level PyTorch package and tensor library.
torch.nn | A subpackage that contains modules and extensible classes for building neural networks.
torch.optim | A subpackage that contains standard optimization operations like SGD and Adam.
torch.nn.functional | A functional interface that contains typical operations used for building neural networks, like loss functions and convolutions.
torchvision | A package that provides access to popular datasets, model architectures, and image transformations for computer vision.
torchvision.transforms | An interface that contains common transforms for image processing.