Implementing multi-GPU training with Horovod in PyTorch

Training a PyTorch model with Horovod involves the following steps:

import torch
import torch.nn.functional as F
import torch.optim as optim
import horovod.torch as hvd

# Initialize Horovod
hvd.init()

# Pin each process to one GPU according to its local rank (one GPU per process)
torch.cuda.set_device(hvd.local_rank())

# Define dataset...
train_dataset = ...

# Partition the dataset among workers using torch.utils.data.distributed.DistributedSampler
train_sampler = torch.utils.data.distributed.DistributedSampler(
    train_dataset, num_replicas=hvd.size(), rank=hvd.rank())

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=..., sampler=train_sampler)

# Build model...
model = ...
model.cuda()

optimizer = optim.SGD(model.parameters())

# Wrap the original optimizer with Horovod's DistributedOptimizer
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())

# Broadcast the initial parameters from rank 0 to all other processes
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for epoch in range(100):
    for batch_idx, (data, target) in enumerate(train_loader):
        # Move the batch onto the GPU assigned to this process
        data, target = data.cuda(), target.cuda()
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{}]\tLoss: {}'.format(
                epoch, batch_idx * len(data), len(train_sampler), loss.item()))
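On top of this skeleton, two additions are common (a hedged sketch, not part of the snippet above): broadcasting the optimizer state from rank 0 so every worker starts from identical state, and averaging per-worker metrics with an allreduce before logging. hvd.broadcast_optimizer_state and hvd.allreduce are standard Horovod calls; set_epoch makes DistributedSampler reshuffle differently each epoch.

# Hedged sketch of extras commonly paired with the skeleton above.
# Broadcast optimizer state so every worker starts from the same state.
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

def metric_average(value, name):
    # hvd.allreduce averages the tensor across all workers by default,
    # so every rank ends up logging the same global value.
    tensor = torch.tensor(value)
    return hvd.allreduce(tensor, name=name).item()

for epoch in range(100):
    # Make DistributedSampler produce a different shuffle each epoch.
    train_sampler.set_epoch(epoch)
    ...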

The complete example code below trains ResNet-50 on ImageNet:

from __future__ import print_function

import torch
import argparse
import torch.backends.cudnn as cudnn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data.distributed
from torchvision import datasets, transforms, models
import horovod.torch as hvd
import os
import math
from tqdm import tqdm
from distutils.version import LooseVersion

# Training settings
parser = argparse.ArgumentParser(description='PyTorch ImageNet Example',
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--train-dir', default=os.path.expanduser('~/imagenet/train'),
                    help='path to training data')
parser.add_argument('--val-dir', default=os.path.expanduser('~/imagenet/validation'),
                    help='path to validation data')
parser.add_argument('--log-dir', default='./logs',
                    help='tensorboard log directory')
parser.add_argument('--checkpoint-format', default='./checkpoint-
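As a usage note (hedged: the script name and process count below are placeholders), a Horovod PyTorch script like this is normally started with the horovodrun launcher rather than with python directly, launching one process per GPU:

horovodrun -np 4 python pytorch_imagenet_resnet50.py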
