Configuring a Horovod Environment on Ubuntu

I've recently been studying deep learning. My advisor asked me to look into Horovod and set up a Horovod environment myself. I read a great many articles online and went through many setup attempts, and even managed to break our lab server /(ㄒoㄒ)/~~.

This left me with a hard-won lesson: rely primarily on the official documentation; tutorials found via search engines should only be a supplement.

Official Horovod installation guide on GitHub: https://github.com/horovod/horovod/blob/master/docs/install.rst

Official Horovod GPU guide: https://github.com/horovod/horovod/blob/master/docs/gpus.rst

Please read the official guides first; what follows is only my personal summary.

After all those attempts, here is my summary of how to set things up (the steps below install the latest versions as of 2021-08-03):

Using Horovod on CPU

1. Install gcc

apt-get install gcc

2. Install g++

apt-get install g++

3. Make sure your Python is version 3.6 or newer (how to install a newer Python is beyond this post; search online if needed).
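A quick way to confirm, assuming python3 and pip point at the interpreter you will build Horovod against:

python3 --version

pip --version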

4. Install CMake via pip

pip install cmake

5. Install TensorFlow via pip (if another version is already installed, pip uninstall it first)

pip install tensorflow
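Before building Horovod against it, it's worth confirming that TensorFlow imports cleanly and checking which version was installed:

python -c "import tensorflow as tf; print(tf.__version__)"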

6. Install Horovod via pip

HOROVOD_WITH_TENSORFLOW=1 pip install horovod[tensorflow]

7. Check that Horovod was built with TensorFlow support (an [X] marks a component that was built into this install)

horovodrun --check-build
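For reference, the output looks roughly like the following (the version number and which boxes are checked will differ on your machine); here TensorFlow, MPI, and Gloo are available and NCCL is not:

Horovod v0.22.1:

Available Frameworks:
    [X] TensorFlow
    [ ] PyTorch
    [ ] MXNet

Available Controllers:
    [X] MPI
    [X] Gloo

Available Tensor Operations:
    [ ] NCCL
    [ ] DDL
    [ ] CCL
    [X] MPI
    [X] Gloo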

8. Below is the official Horovod example (source: https://github.com/horovod/horovod/blob/master/examples/tensorflow2/tensorflow2_keras_mnist.py). I saved it as main.py.

This example can also run on GPUs, but without a GPU environment it will simply run on the CPU.

import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Horovod: initialize Horovod.
hvd.init()

# Horovod: pin GPU to be used to process local rank (one GPU per process)
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

(mnist_images, mnist_labels), _ = \
    tf.keras.datasets.mnist.load_data(path='mnist-%d.npz' % hvd.rank())

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),
             tf.cast(mnist_labels, tf.int64))
)
dataset = dataset.repeat().shuffle(10000).batch(128)

mnist_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),
    tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Horovod: adjust learning rate based on number of GPUs.
scaled_lr = 0.001 * hvd.size()
opt = tf.optimizers.Adam(scaled_lr)

# Horovod: add Horovod DistributedOptimizer.
opt = hvd.DistributedOptimizer(
    opt, backward_passes_per_step=1, average_aggregated_gradients=True)

# Horovod: Specify `experimental_run_tf_function=False` to ensure TensorFlow
# uses hvd.DistributedOptimizer() to compute gradients.
mnist_model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),
                    optimizer=opt,
                    metrics=['accuracy'],
                    experimental_run_tf_function=False)

callbacks = [
    # Horovod: broadcast initial variable states from rank 0 to all other processes.
    # This is necessary to ensure consistent initialization of all workers when
    # training is started with random weights or restored from a checkpoint.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),

    # Horovod: average metrics among workers at the end of every epoch.
    #
    # Note: This callback must be in the list before the ReduceLROnPlateau,
    # TensorBoard or other metrics-based callbacks.
    hvd.callbacks.MetricAverageCallback(),

    # Horovod: using `lr = 1.0 * hvd.size()` from the very beginning leads to worse final
    # accuracy. Scale the learning rate `lr = 1.0` ---> `lr = 1.0 * hvd.size()` during
    # the first three epochs. See https://arxiv.org/abs/1706.02677 for details.
    hvd.callbacks.LearningRateWarmupCallback(initial_lr=scaled_lr, warmup_epochs=3, verbose=1),
]

# Horovod: save checkpoints only on worker 0 to prevent other workers from corrupting them.
if hvd.rank() == 0:
    callbacks.append(tf.keras.callbacks.ModelCheckpoint('./checkpoint-{epoch}.h5'))

# Horovod: write logs on worker 0.
verbose = 1 if hvd.rank() == 0 else 0

# Train the model.
# Horovod: adjust number of steps based on number of GPUs.
mnist_model.fit(dataset, steps_per_epoch=500 // hvd.size(), callbacks=callbacks, epochs=24, verbose=verbose)

9. Run the example

horovodrun -np 4 -H localhost:4 python main.py
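Here -np 4 launches four worker processes, and -H localhost:4 says all four slots live on the local machine. The same pattern scales out to several machines; server1 and server2 below are hypothetical hostnames reachable over passwordless SSH:

horovodrun -np 8 -H server1:4,server2:4 python main.py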

10. Expected result: the output mentions "CPU" and training completes with no errors. (Screenshot omitted.)

Using Horovod on GPU

Configuring NCCL

1. Check your GPU's CUDA version

nvidia-smi
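The CUDA version appears in the top banner of the output; with illustrative driver numbers it looks roughly like this:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
+-----------------------------------------------------------------------------+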

2. Download NCCL (official download page: https://developer.nvidia.com/nccl/nccl-download)

My CUDA version is 11.0, so I downloaded the build for CUDA 11.0. I recommend the O/S-agnostic local installer, i.e. the .txz tarball extracted in the next step (the screenshot of my selection is omitted); you can also follow the official instructions on that page instead.

3. Extract the downloaded nccl_2.10.3-1+cuda11.0_x86_64.txz

tar xvf nccl_2.10.3-1+cuda11.0_x86_64.txz

4. Move the extracted directory to /usr/local

mv nccl_2.10.3-1+cuda11.0_x86_64 /usr/local

5. Enter that directory

cd /usr/local

6. Add NCCL to your environment

vim ~/.bashrc

Press a to enter insert mode, then append the following line at the end:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/nccl_2.10.3-1+cuda11.0_x86_64/lib
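Save and quit vim (press Esc, then type :wq and Enter), and reload the file so the change takes effect in your current shell:

source ~/.bashrc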

Configuring Open MPI (done under the /usr/local path). Official download page: https://www.open-mpi.org/software/ompi/v4.1/. Official build instructions, which the commands below follow: https://www.open-mpi.org/faq/?category=building#easy-build

1. Download Open MPI

wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.1.tar.gz

2. Extract the tarball

tar zxvf openmpi-4.1.1.tar.gz

3. Enter the directory

cd openmpi-4.1.1

4. Configure Open MPI to install under /usr/local

./configure --prefix=/usr/local 

5. Build and install

make all install

Optional: configure the environment for mpirun

1. Open the ~/.bashrc file

vim ~/.bashrc 

2. Append the following two lines at the end:

export PATH="$PATH:/usr/local/openmpi-4.1.1/bin" 

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/openmpi-4.1.1/lib/" 

3. The exports above assume openmpi-4.1.1 sits under /usr/local; if yours is elsewhere, move it there:

mv openmpi-4.1.1 /usr/local

4. Reload ~/.bashrc (source ~/.bashrc) and refresh the shared-library cache

sudo ldconfig

5. Test mpirun

mpirun
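If the PATH is set correctly, running mpirun with no arguments prints its usage/help text rather than "command not found". You can also check the version; the first line should read something like mpirun (Open MPI) 4.1.1:

mpirun --version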

Install TensorFlow (skip this if you already completed the CPU setup above)

apt-get install gcc

apt-get install g++

pip install tensorflow

Install the Horovod pip package

For TensorFlow support alone, the same command as in the CPU section works:

HOROVOD_WITH_TENSORFLOW=1 pip install horovod[tensorflow]

To build with GPU operations over NCCL, point the build at your NCCL directory (if a CPU-only Horovod is already installed, pip uninstall horovod first):

HOROVOD_NCCL_HOME=/usr/local/nccl_2.10.3-1+cuda11.0_x86_64 HOROVOD_GPU_OPERATIONS=NCCL pip install --no-cache-dir horovod

Run the example (the same main.py as in the CPU section)

horovodrun -np 4 -H localhost:4 python main.py

Result: training should now run on the GPUs with no errors. (Screenshot omitted.)
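To double-check the GPU build, re-run the build check from the CPU section; NCCL should now be marked [X] under "Available Tensor Operations":

horovodrun --check-build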
