Personal notes on single-machine multi-GPU (DP and DDP) basics

Contents

torch.nn.DataParallel

API

Parameters

Example

How it works

Saving and loading a multi-GPU network

torch.nn.parallel.DistributedDataParallel

How it works

API

Parameters

Steps

Example


torch.nn.DataParallel

Used for single-machine, multi-GPU training.

The input batch is split evenly across the cards for the forward computation, but gradient reduction and the parameter update happen on the main GPU.

Drawback: the GPU load is unbalanced. The main GPU (logical card 0 by default) carries a much heavier load than the other GPUs.

API

#API
torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)

Parameters

  • module: the model to be parallelized across multiple cards;
  • device_ids: a list of the GPU ids that may be used;
  • output_device: the device on which the gathered outputs are placed. If not specified it defaults to device_ids[0], i.e. card 0, which is why multi-GPU training with DataParallel is not load-balanced: card 0 generally uses more memory;
  • dim: the dimension along which tensors are scattered, default 0; nn.DataParallel chunks the data along dim 0 (the batch dimension).

PS: if the program starts with os.environ["CUDA_VISIBLE_DEVICES"] = "2,3", then logical card 0 refers to physical card 2.

Example

# Example
import os
import torch
import torch.nn as nn

os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'  # restrict the visible cards; with this setting, logical card 0 is physical card 2

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input = torch.randn(8, 4, 160, 160).to(device)               # move the data to the GPU
model = nn.Conv2d(4, 3, kernel_size=3, stride=1, padding=1)
model = model.to(device)                                      # move the model to the GPU
model = torch.nn.DataParallel(model, device_ids=[0, 1])       # wrap the model so the batch is split across cards 0 and 1
output = model(input)                                         # forward pass now runs on both cards

How it works

        During the forward pass, the model is replicated from the main card (logical card 0 by default) onto every device, and the input is split along the batch dimension and scattered to the devices for computation. The per-device outputs are gathered back onto the main CUDA device, where the loss is computed (and averaged). During the backward pass, the loss is scattered back to each device, each device computes its gradients via backpropagation, the gradients are then reduced onto the main device and averaged, and the main device backpropagates and updates the parameters. Before the next iteration, the main device broadcasts the model parameters to the other devices so the weights stay in sync.
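A minimal sketch of one DataParallel training step that follows this flow (the model, data, and hyper-parameters below are illustrative placeholders, not taken from the original example):

# Minimal DataParallel training-step sketch (illustrative model and data)
import torch
import torch.nn as nn

model = nn.DataParallel(nn.Linear(128, 10).cuda(), device_ids=[0, 1])
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(32, 128).cuda()          # split along dim 0 and scattered to the replicas
labels = torch.randint(0, 10, (32,)).cuda()

outputs = model(inputs)             # replicate -> scatter -> parallel forward -> gather onto GPU 0
loss = criterion(outputs, labels)   # loss is computed on the output device (GPU 0)
optimizer.zero_grad()
loss.backward()                     # per-replica gradients are reduced onto GPU 0
optimizer.step()                    # parameters are updated on GPU 0 and re-broadcast on the next forward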

PS: is averaging the loss this way reasonable? Each GPU's loss is divided by that GPU's batch size, and those per-GPU means are then averaged. But when the number of GPUs changes, the final loss changes too, because each GPU processes a different batch size:

A batch of 3 calculated on a single GPU would give per-sample results
[0.3, 0.2, 0.8], and a model that returns the mean loss would return 0.43.

If cast to DataParallel, and calculated on 2 GPUs, [GPU1 - batch 0,1], [GPU2 - batch 2] 
- return values would be [0.25, 0.8] (0.25 is average between 0.2 and 0.3)
- taking the average loss of [0.25, 0.8] is now 0.525!

Calculating on 3 GPUs, one gets [0.3, 0.2, 0.8] as results and average is back to 0.43!

        One can pass size_average=False, reduce=True instead. The losses on each GPU are summed rather than divided by that GPU's batch size; the summed partial losses are then gathered, added together, and divided by the total batch size, so the final average loss is the same no matter how many GPUs are used.
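A small sketch of that idea. Note that size_average/reduce are deprecated in recent PyTorch; reduction='sum' is the equivalent. The tensors below are illustrative:

import torch
import torch.nn as nn

# Sum the per-sample losses, then divide by the full batch size.
criterion = nn.MSELoss(reduction='sum')     # equivalent to size_average=False, reduce=True
output = torch.randn(8, 3)                  # illustrative predictions for a batch of 8
target = torch.randn(8, 3)                  # illustrative targets
loss = criterion(output, target) / output.size(0)   # same value regardless of how the batch was split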

Saving and loading a multi-GPU network

import os
import torch

net = torch.nn.Linear(10, 1)                          # build a network
net = torch.nn.DataParallel(net, device_ids=[0, 3])   # wrap it with DataParallel
os.makedirs('./networks', exist_ok=True)
torch.save(net.module.state_dict(), './networks/multiGPU.h5')  # save the unwrapped weights (note .module)

# Load the network
new_net = torch.nn.Linear(10, 1)
new_net.load_state_dict(torch.load("./networks/multiGPU.h5"))
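If a checkpoint was instead saved from the wrapped model (net.state_dict() rather than net.module.state_dict()), every key carries a "module." prefix. A common way to load such a checkpoint into a plain, unwrapped model is to strip the prefix first (a sketch, reusing the file name from above):

import torch

state_dict = torch.load("./networks/multiGPU.h5", map_location="cpu")
# Strip the "module." prefix that DataParallel/DDP adds to parameter names (no-op if absent)
state_dict = {(k[len("module."):] if k.startswith("module.") else k): v
              for k, v in state_dict.items()}
new_net = torch.nn.Linear(10, 1)
new_net.load_state_dict(state_dict)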

torch.nn.parallel.DistributedDataParallel

How it works

        With DDP, after each process has finished computing its gradients, the gradients from all processes are aggregated and averaged, the rank-0 (main card) process broadcasts them to all processes, and each process then uses that gradient to update its parameters independently. DP, by contrast, gathers the gradients onto GPU 0, updates the parameters there during backpropagation, and then broadcasts the updated parameters to the remaining GPUs. Because the model replicas in DDP start from identical parameters (a one-time broadcast at initialization) and every update uses the same averaged gradient, the parameters in all processes stay identical throughout training. In DP, a single optimizer is maintained for the whole run: the per-GPU gradients are summed, the parameters are updated on the main card, and the model parameters are then broadcast to the other GPUs. Compared with DP, DDP transfers less data, so it is faster and more efficient.
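Conceptually, the gradient averaging that DDP performs looks like the sketch below. DDP's real implementation overlaps this communication with the backward pass using gradient buckets; this is only an illustration of the idea:

import torch.distributed as dist

def average_gradients(model):
    # After loss.backward(), all-reduce each gradient and divide by the world size,
    # so every process ends up holding the same averaged gradient.
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size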

API

#API
torch.nn.parallel.DistributedDataParallel(module, device_ids=None, output_device=None, 
dim=0, broadcast_buffers=True, process_group=None, bucket_cap_mb=25, 
find_unused_parameters=False, check_reduction=False)

Parameters

  • module: the model to be parallelized across multiple cards;
  • device_ids: a list of GPU ids; for a single-device module under DDP this is typically a single entry, e.g. [local_rank];
  • output_device: the device on which the module's outputs are placed (for single-device modules, usually the same as device_ids[0]);
  • dim: the dimension along which tensors are scattered, default 0 (the batch dimension).

Steps

1. First, initialize the process group with init_process_group, declaring NCCL as the GPU communication backend (the official docs also provide "gloo" as a backend):

import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
torch.distributed.init_process_group(backend='nccl')
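A small optional sketch that picks the backend based on whether a GPU is available (an assumption about the setup, not part of the original steps; it still expects to be started by a distributed launcher that sets the rendezvous environment variables):

import torch
import torch.distributed as dist

# NCCL for GPU training, gloo as the CPU-only fallback
backend = 'nccl' if torch.cuda.is_available() else 'gloo'
dist.init_process_group(backend=backend)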

2. Then configure the GPU for each process:

local_rank = torch.distributed.get_rank()
print(local_rank)
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)
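On a single machine the global rank returned by get_rank() equals the local rank, so the snippet above works. A more general sketch reads LOCAL_RANK, which torchrun (and torch.distributed.launch with --use_env) exports for each process:

import os
import torch

local_rank = int(os.environ.get("LOCAL_RANK", 0))   # set by torchrun / launch --use_env
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)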

3. Load the pretrained model (AutoModelForMaskedLM from Hugging Face transformers, as in the full example below):

from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained(model_path, trust_remote_code=True)
model.to(device)

4. Define a custom dataset and wrap the DataLoader with a DistributedSampler (see the set_epoch sketch after this step):

from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler

class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size).to('cuda')

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

dataset = RandomDataset(input_size, data_size)
# Use DistributedSampler so each process only sees its own shard of the data
rand_loader = DataLoader(dataset=dataset,
                         batch_size=batch_size,
                         sampler=DistributedSampler(dataset))
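When training over multiple epochs, DistributedSampler needs set_epoch so that the shuffle differs between epochs. A minimal sketch, reusing dataset and batch_size from the step above (the epoch count and loop body are placeholders):

num_epochs = 3                               # illustrative
sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
for epoch in range(num_epochs):
    sampler.set_epoch(epoch)                 # reshuffle with a different seed each epoch
    for batch in loader:
        pass                                 # training step goes here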

5. Wrap the model with DDP for distributed training:

model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank, find_unused_parameters=True)  # this puts the model onto its process's GPU

6. Run the .py file:

python -m torch.distributed.launch --nproc_per_node=4 test.py
  • torch.distributed.launch: launches multi-node, multi-GPU training (newer PyTorch versions recommend torchrun as its replacement)
  • nproc_per_node: the number of processes started on each machine, normally set to the number of available GPUs
  • nnodes: the number of machines

PS: with DataParallel, batch_size must be set to n times the single-card batch size (it is the total batch that gets split), whereas with DistributedDataParallel, batch_size is set the same as for a single card, because each process loads its own batch. See the worked example below.
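A quick worked example of the effective global batch size, assuming 4 GPUs (the numbers are illustrative):

n_gpus = 4
dp_batch_size  = 64   # DataLoader batch size under DataParallel -> 64 / 4 = 16 samples per GPU
ddp_batch_size = 16   # per-process DataLoader batch size under DDP -> 16 * 4 = 64 samples globally
assert dp_batch_size == ddp_batch_size * n_gpus   # both give the same global batch of 64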

Example

from transformers import AutoTokenizer, AutoModelForMaskedLM
from torch.utils.data import Dataset, DataLoader
import torch
import Bio
from Bio import SeqIO
import os
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Set the CUDA_VISIBLE_DEVICES environment variable before any CUDA work
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

torch.distributed.init_process_group(backend='nccl')
local_rank = torch.distributed.get_rank()
print(local_rank)
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)


all_chunks = []
def read_in_chunks(file_object, chunk_size):
    """Lazy function (generator) to read a file piece by piece."""
    sequence = ''
    while True:
        identifier = file_object.readline()
        if not identifier:
            break
        sequence_line = file_object.readline().strip()
        plus_line = file_object.readline()
        quality_line = file_object.readline()
        sequence += sequence_line
        while len(sequence) >= chunk_size:
            yield sequence[:chunk_size]
            sequence = sequence[chunk_size:]
    if sequence:
        yield sequence

with open('/share/home/yzwl_wangzf/20240111/gunzip_test/yzwpsg000000002/yzwpsg000000002_1.clean.fq') as f:
    for chunk in read_in_chunks(f, 982560):
        all_chunks.append(chunk)

def get_chuncks(all_chunks):
    true_chunks = []
    for m in all_chunks:
        chunks = [m[i:i+12282] for i in range(0, len(m), 12282)]
        true_chunks.extend(chunks)
    return true_chunks

true_chunks = get_chuncks(all_chunks)
print(len(true_chunks))

# Import the tokenizer and the model
local_directory = 'model/'
torch.cuda.empty_cache()
tokenizer = AutoTokenizer.from_pretrained(local_directory, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(local_directory, trust_remote_code=True)
model.to(device) 

class TextDataset(Dataset):
    def __init__(self, texts, tokenizer):
        self.texts = texts
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        text = self.texts[idx]
        tokens_ids = self.tokenizer(text, return_tensors="pt", padding="max_length", max_length=2048, truncation=True)["input_ids"]
        attention_mask = tokens_ids != self.tokenizer.pad_token_id
        return tokens_ids, attention_mask

from torch.utils.data.distributed import DistributedSampler
dataset = TextDataset(true_chunks, tokenizer)
dataloader = DataLoader(dataset=dataset,batch_size=70,sampler=DistributedSampler(dataset))

model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank, find_unused_parameters=True)  # wrap the model with DDP (one GPU per process)


model.eval()
torch.cuda.empty_cache()

print(torch.cuda.device_count())
print(model)
#print(model.module.module.module)  # Assuming your model is named 'model'
#model = model.module.module.to(device)

def do_inference(model, dataloader):
    results = []
    for tokens_ids, attention_mask in dataloader:
        tokens_ids = tokens_ids.squeeze(1).to(device)
        attention_mask = attention_mask.squeeze(1).to(device)
        with torch.no_grad():
            torch_outs = model(
                tokens_ids,
                attention_mask=attention_mask,
                encoder_attention_mask=attention_mask,
                output_hidden_states=True,
            )
        # Compute sequences embeddings
        embeddings = torch_outs['hidden_states'][-1].detach().cpu()
        bs, nt, ed = embeddings.shape
        embeddings = embeddings.reshape(bs*nt, ed)
        # Add embed dimension axis
        attention_mask = torch.unsqueeze(attention_mask, dim=-1).cpu()
        attention_mask = attention_mask.reshape(bs*nt, -1)
        # Compute mean embeddings per sequence
        mean_sequence_embeddings = torch.sum(attention_mask*embeddings, axis=-2)/torch.sum(attention_mask, axis=0)
        results.append(mean_sequence_embeddings.cpu())
    return results

results = do_inference(model, dataloader)
# Launch with: python -m torch.distributed.launch --nproc_per_node=4 model/test.py
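Each DDP process only runs inference on its own shard of the data, so `results` holds per-rank embeddings. One way to combine them on rank 0 is sketched below (it assumes the process group from the example is still initialized; the output file name is illustrative):

import torch
import torch.distributed as dist

gathered = [None] * dist.get_world_size()
dist.all_gather_object(gathered, results)      # every rank receives every rank's list of tensors
if dist.get_rank() == 0:
    all_results = [t for rank_results in gathered for t in rank_results]
    torch.save(all_results, "embeddings.pt")   # illustrative output path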

