Notes on PyTorch Parallel Training

Reference

Roughly speaking, today's large-scale distributed training techniques for deep learning can be divided into the following three categories:

Data Parallelism
Naive: every worker keeps a full copy of the model and optimizer; in each iteration the samples are split into shards and dispatched to the workers, which compute in parallel
ZeRO: Zero Redundancy Optimizer, a memory-optimization technique for data parallelism proposed by Microsoft; the core idea is to keep the communication efficiency of naive data parallelism while reducing memory usage as much as possible
Model/Pipeline Parallelism
Naive: split the model vertically, place different layers on different devices, and run the forward/backward passes sequentially
GPipe: vertically partitioned model parallelism with micro-batch pipelining
Megatron-LM: model-parallel acceleration via tensor slicing
Non-parallelism approaches
Gradient Accumulation: works around insufficient GPU memory by accumulating gradients; commonly used when the model is large and a single card can only fit a very small batch (a minimal sketch follows this list)
CPU Offload: uses both CPU and GPU memory to train large models, i.e. there are GPU-CPU-GPU transfers
etc.: many more that are not listed one by one (e.g. Checkpointing, Memory Efficient Optimizer)
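
As a reference for the gradient-accumulation item above, here is a minimal sketch (assuming generic model / train_loader / criterion / optimizer objects; accumulation_steps is a placeholder value):

accumulation_steps = 4  # effective batch = accumulation_steps * per-step batch

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(train_loader):
    outputs = model(inputs)
    # scale the loss so the accumulated gradient matches a large-batch update
    loss = criterion(outputs, targets) / accumulation_steps
    loss.backward()  # gradients keep accumulating in param.grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()       # one optimizer update per accumulation window
        optimizer.zero_grad()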

A strong recommendation for DeepSpeed, a library open-sourced by Microsoft in 2020 that optimizes distributed training in PyTorch and makes it feasible to train huge models with tens of billions of parameters. Its 3D-parallelism combination (DP + PP + MP) greatly lowers the hardware requirements of large-model training and improves training efficiency.
There are plenty of articles explaining the principles of PS / Ring-AllReduce and PyTorch DP/DDP; a few representative ones:
PYTORCH DISTRIBUTED OVERVIEW
PyTorch 源码解读之 DP & DDP
Bringing HPC Techniques to Deep Learning

Single Node, Single GPU [snsc.py]
Single Node, Multiple GPUs (with DataParallel) [snmc_dp.py]
Multiple Nodes, Multiple GPUs (with DistributedDataParallel)
torch.distributed.launch [mnmc_ddp_launch.py]
torch.multiprocessing [mnmc_ddp_mp.py]
Slurm Workload Manager [mnmc_ddp_slurm.py]
ImageNet training example [imagenet.py]

The single-node, single-GPU case needs no further explanation.

Single Node, Multiple GPUs

It is really just one line, but GPU memory usage may end up unbalanced across the cards, and the batch size you set is still a single process's batch, which DataParallel splits across the GPUs.

net = nn.DataParallel(net)
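
A slightly fuller sketch of the same idea (assuming two visible GPUs; the input batch is automatically scattered across them and the outputs are gathered back on GPU 0):

import torch
import torch.nn as nn
import torchvision

net = torchvision.models.resnet18(num_classes=10).cuda()  # parameters start on GPU 0
net = nn.DataParallel(net, device_ids=[0, 1])              # replicate onto GPU 0 and 1
inputs = torch.randn(256, 3, 32, 32).cuda()                # one "global" batch of 256
outputs = net(inputs)                                      # each GPU processes 128 samples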

Multiple Nodes, Multiple GPUs

Process-group concepts
GROUP: the process group; in most cases all DDP processes belong to the same group
WORLD_SIZE: the total number of processes (in principle one process per GPU is optimal)
RANK: the index of the current process, used for inter-process communication; the host with rank = 0 is the master node
LOCAL_RANK: the index of the GPU that the current process uses on its node

Basic DDP usage (coding workflow)
Initialize the process group with torch.distributed.init_process_group
Create the distributed model with torch.nn.parallel.DistributedDataParallel
Create the DataLoader with torch.utils.data.distributed.DistributedSampler
Adjust the other necessary places (move tensors to the right device, save/load checkpoints, metric computation, etc.; a checkpoint sketch follows this list)
Launch training with torch.distributed.launch / torch.multiprocessing or Slurm
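
For the save/load-checkpoint part of step 4, a common pattern is sketched below (assuming torch.distributed is imported as dist, net is the DDP-wrapped model, and local_rank is defined as in the script further down; "ckpt.pth" is a placeholder file name): save on rank 0 only, and map the checkpoint onto the local device when loading.

# save: only rank 0 writes, so the processes do not race on the same file
if dist.get_rank() == 0:
    torch.save(net.module.state_dict(), "ckpt.pth")  # unwrap the DDP module
dist.barrier()  # make sure the file is complete before other ranks read it

# load: remap tensors saved from cuda:0 onto this process's GPU
map_location = {"cuda:0": f"cuda:{local_rank}"}
net.module.load_state_dict(torch.load("ckpt.pth", map_location=map_location))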

Using collective communication
To gather, broadcast, or average information across the cards (e.g. all_reduce is needed when computing accuracy or the total loss), collective communication operations are required (a small sketch follows the list below); see:
torch.distributed
NCCL-Woolley
scaled_all_reduce
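
As a concrete example of the all_reduce mentioned above, a minimal sketch for turning per-process statistics into global ones (assuming the process group is already initialized and correct / total / loss / device come from the local training loop):

import torch
import torch.distributed as dist

# sum the per-process counts; afterwards every rank holds the global numbers
stats = torch.tensor([correct, total], dtype=torch.float64, device=device)
dist.all_reduce(stats, op=dist.ReduceOp.SUM)
global_acc = (stats[0] / stats[1]).item()

# average the loss across processes
loss_t = loss.detach().clone()
dist.all_reduce(loss_t, op=dist.ReduceOp.SUM)
loss_t /= dist.get_world_size()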

Usage of the different launch methods
torch.distributed.launch: mnmc_ddp_launch.py
torch.multiprocessing: mnmc_ddp_mp.py
Slurm Workload Manager: mnmc_ddp_slurm.py
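
The torch.distributed.launch version is listed in full below. For the torch.multiprocessing style, the entry point looks roughly like the sketch here (not the actual mnmc_ddp_mp.py; the address/port and the single-node assumption are placeholders):

import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def main(local_rank, world_size):
    # each spawned process joins the process group with its own rank
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://localhost:22222",  # placeholder address/port
        world_size=world_size,
        rank=local_rank,  # single node, so rank == local_rank
    )
    torch.cuda.set_device(local_rank)
    # ... build the model, wrap it in DDP, create the sampler/loader and train,
    #     exactly as in the launch-based script below ...

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(main, args=(world_size,), nprocs=world_size)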

"""
(MNMC) Multiple Nodes Multi-GPU Cards Training
    with DistributedDataParallel and torch.distributed.launch
Try to compare with [snsc.py, snmc_dp.py & mnmc_ddp_mp.py] and find out the differences.
"""

import os

import torch
import torch.distributed as dist
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torch.nn.parallel import DistributedDataParallel as DDP

BATCH_SIZE = 256
EPOCHS = 5


if __name__ == "__main__":

    # 0. set up distributed device
    rank = int(os.environ["RANK"])
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)  # bind this process to its local GPU
    dist.init_process_group(backend="nccl")
    device = torch.device("cuda", local_rank)

    print(f"[init] == local rank: {local_rank}, global rank: {rank} ==")

    # 1. define network
    net = torchvision.models.resnet18(pretrained=False, num_classes=10)
    net = net.to(device)
    # DistributedDataParallel
    net = DDP(net, device_ids=[local_rank], output_device=local_rank)

    # 2. define dataloader
    trainset = torchvision.datasets.CIFAR10(
        root="./data",
        train=True,
        download=False,
        transform=transforms.Compose(
            [
                transforms.RandomCrop(32, padding=4),
                transforms.RandomHorizontalFlip(),
                transforms.ToTensor(),
                transforms.Normalize(
                    (0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)
                ),
            ]
        ),
    )
    # DistributedSampler
    # we test on a single machine with 2 GPUs;
    # DistributedSampler partitions the dataset across the processes,
    # and batch_size below is per process, so the effective global batch
    # is BATCH_SIZE * world_size
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        trainset,
        shuffle=True,
    )
    train_loader = torch.utils.data.DataLoader(
        trainset,
        batch_size=BATCH_SIZE,
        num_workers=4,
        pin_memory=True,
        sampler=train_sampler,
    )

    # 3. define loss and optimizer
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(
        net.parameters(),
        lr=0.01 * 2,  # base lr 0.01, scaled for the 2-GPU setting (linear scaling rule)
        momentum=0.9,
        weight_decay=0.0001,
        nesterov=True,
    )

    if rank == 0:
        print("            =======  Training  ======= \n")

    # 4. start to train
    net.train()
    for ep in range(1, EPOCHS + 1):
        train_loss = correct = total = 0
        # set sampler
        train_loader.sampler.set_epoch(ep)

        for idx, (inputs, targets) in enumerate(train_loader):
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = net(inputs)

            loss = criterion(outputs, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            train_loss += loss.item()
            total += targets.size(0)
            correct += torch.eq(outputs.argmax(dim=1), targets).sum().item()
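            # note: correct / total here are per-process counts; a truly global
            # accuracy would all_reduce them first (see the collective
            # communication notes above)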

            if rank == 0 and ((idx + 1) % 25 == 0 or (idx + 1) == len(train_loader)):
                print(
                    "   == step: [{:3}/{}] [{}/{}] | loss: {:.3f} | acc: {:6.3f}%".format(
                        idx + 1,
                        len(train_loader),
                        ep,
                        EPOCHS,
                        train_loss / (idx + 1),
                        100.0 * correct / total,
                    )
                )
    if rank == 0:
        print("\n            =======  Training Finished  ======= \n")

"""
usage:
>>> python -m torch.distributed.launch --help
example: 1 node, 4 GPUs per node (4 GPUs)
>>> python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --nnodes=1 \
    --node_rank=0 \
    --master_addr=localhost \
    --master_port=22222 \
    mnmc_ddp_launch.py
[init] == local rank: 3, global rank: 3 ==
[init] == local rank: 1, global rank: 1 ==
[init] == local rank: 0, global rank: 0 ==
[init] == local rank: 2, global rank: 2 ==
            =======  Training  ======= 
   == step: [ 25/49] [0/5] | loss: 1.980 | acc: 27.953%
   == step: [ 49/49] [0/5] | loss: 1.806 | acc: 33.816%
   == step: [ 25/49] [1/5] | loss: 1.464 | acc: 47.391%
   == step: [ 49/49] [1/5] | loss: 1.420 | acc: 48.448%
   == step: [ 25/49] [2/5] | loss: 1.300 | acc: 52.469%
   == step: [ 49/49] [2/5] | loss: 1.274 | acc: 53.648%
   == step: [ 25/49] [3/5] | loss: 1.201 | acc: 56.547%
   == step: [ 49/49] [3/5] | loss: 1.185 | acc: 57.360%
   == step: [ 25/49] [4/5] | loss: 1.129 | acc: 59.531%
   == step: [ 49/49] [4/5] | loss: 1.117 | acc: 59.800%
            =======  Training Finished  =======
example: 1 node, 2 tasks, 4 GPUs per task (8 GPUs)
>>> CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --nnodes=2 \
    --node_rank=0 \
    --master_addr="10.198.189.10" \
    --master_port=22222 \
    mnmc_ddp_launch.py
>>> CUDA_VISIBLE_DEVICES=4,5,6,7 python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --nnodes=2 \
    --node_rank=1 \
    --master_addr="10.198.189.10" \
    --master_port=22222 \
    mnmc_ddp_launch.py
            =======  Training  ======= 
   == step: [ 25/25] [0/5] | loss: 1.932 | acc: 29.088%
   == step: [ 25/25] [1/5] | loss: 1.546 | acc: 43.088%
   == step: [ 25/25] [2/5] | loss: 1.424 | acc: 48.032%
   == step: [ 25/25] [3/5] | loss: 1.335 | acc: 51.440%
   == step: [ 25/25] [4/5] | loss: 1.243 | acc: 54.672%
            =======  Training Finished  =======
example: 2 nodes, 8 GPUs per node (16 GPUs)
>>> python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --nnodes=2 \
    --node_rank=0 \
    --master_addr="10.198.189.10" \
    --master_port=22222 \
    mnmc_ddp_launch.py
>>> python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --nnodes=2 \
    --node_rank=1 \
    --master_addr="10.198.189.10" \
    --master_port=22222 \
    mnmc_ddp_launch.py
[init] == local rank: 5, global rank: 5 ==
[init] == local rank: 3, global rank: 3 ==
[init] == local rank: 2, global rank: 2 ==
[init] == local rank: 4, global rank: 4 ==
[init] == local rank: 0, global rank: 0 ==
[init] == local rank: 6, global rank: 6 ==
[init] == local rank: 7, global rank: 7 ==
[init] == local rank: 1, global rank: 1 ==
            =======  Training  ======= 
   == step: [ 13/13] [0/5] | loss: 2.056 | acc: 23.776%
   == step: [ 13/13] [1/5] | loss: 1.688 | acc: 36.736%
   == step: [ 13/13] [2/5] | loss: 1.508 | acc: 44.544%
   == step: [ 13/13] [3/5] | loss: 1.462 | acc: 45.472%
   == step: [ 13/13] [4/5] | loss: 1.357 | acc: 49.344%
            =======  Training Finished  ======= 
"""