一、What Is Distributed Training
1、Model Parallelism
Split a complex neural network into pieces and place the pieces on different GPUs, with the GPUs computing their parts of each step in lockstep. This approach is typically used when the model is too large or complex for a single device, but efficiency takes a hit, since the GPUs must pass activations between devices and wait on one another.
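For intuition, here is a minimal model-parallel sketch in PyTorch (not from the original text; the layer sizes and the two devices cuda:0 / cuda:1 are assumptions for illustration): the first half of the network lives on one GPU, the second half on another, and activations are moved between them in the forward pass.

import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        # First half of the network on GPU 0, second half on GPU 1 (sizes are made up)
        self.part1 = nn.Sequential(nn.Linear(1024, 512), nn.ReLU()).to('cuda:0')
        self.part2 = nn.Linear(512, 10).to('cuda:1')

    def forward(self, x):
        x = self.part1(x.to('cuda:0'))
        # Moving activations across GPUs is part of the efficiency cost mentioned above
        return self.part2(x.to('cuda:1'))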
2、Data Parallelism
Each machine holds a complete copy of the model. The data is split into n shards, one shard per worker, and each worker independently computes gradients on its own shard. The workers' gradients are then averaged and synchronized, and each node applies the same averaged gradient to update its local copy of the model, so at the end of every step all nodes hold identical models. This increases the amount of data that can be processed per step to n times that of a single machine.
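The gradient-averaging step is the core of data parallelism. A toy single-process sketch (hypothetical tensors and learning rate, just to show the arithmetic): each of n workers produces a gradient from its own shard, the gradients are averaged (in practice via an allreduce), and every worker applies the same averaged update, so all replicas stay identical.

import torch

n = 4                                               # number of workers
local_grads = [torch.randn(3) for _ in range(n)]    # gradient from each worker's data shard
avg_grad = torch.stack(local_grads).mean(dim=0)     # the allreduce-average step
param = torch.zeros(3)
param -= 0.1 * avg_grad                             # every replica applies the identical update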
Horovod PyTorch installation
https://horovod.readthedocs.io/en/latest/conda.html
Reference: https://horovod.readthedocs.io/en/latest/install_include.html
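As a rough reference, installation is usually along these lines (exact flags depend on your CUDA/MPI/NCCL setup; treat these as assumptions and check the docs linked above):

HOROVOD_WITH_PYTORCH=1 pip install horovod[pytorch]
# or, via conda-forge:
conda install -c conda-forge horovod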
Example code
https://github.com/horovod/horovod/blob/master/examples/pytorch/pytorch_mnist.py
import torch
import torch.optim as optim
import torch.nn.functional as F
import horovod.torch as hvd

# Initialize Horovod
hvd.init()

# Pin each process to one GPU based on its local rank (one GPU per process)
torch.cuda.set_device(hvd.local_rank())

# Define dataset...
train_dataset = ...

# Partition dataset among workers using DistributedSampler
train_sampler = torch.utils.data.distributed.DistributedSampler(
    train_dataset, num_replicas=hvd.size(), rank=hvd.rank())
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=..., sampler=train_sampler)

# Build model...
model = ...
model.cuda()

# lr here is a placeholder; the full example scales the learning rate by hvd.size()
optimizer = optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap with Horovod's DistributedOptimizer, which allreduce-averages gradients across workers
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())

# Broadcast initial parameters from rank 0 to all other processes.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for epoch in range(100):
    train_sampler.set_epoch(epoch)  # reshuffle each worker's shard every epoch
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.cuda(), target.cuda()  # move the batch to this process's GPU
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:  # args comes from argparse in the full example
            print('Train Epoch: {} [{}/{}]\tLoss: {}'.format(
                epoch, batch_idx * len(data), len(train_sampler), loss.item()))
https://zhuanlan.zhihu.com/p/264778072
MPI commands
https://blog.51cto.com/doublelinux/1947869
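For reference, a Horovod training script (pytorch_mnist.py below just stands in for the example above) is typically launched either with horovodrun or with mpirun directly; the flags follow the Horovod documentation and may need adjusting for your cluster:

horovodrun -np 4 -H localhost:4 python pytorch_mnist.py

mpirun -np 4 \
    -bind-to none -map-by slot \
    -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
    -mca pml ob1 -mca btl ^openib \
    python pytorch_mnist.py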