Run the MPI PingPong benchmark

We will use the MPI PingPong benchmark for our testing. By default, Open MPI should use InfiniBand networks in preference to any TCP networks it finds. However, we will explicitly tell it to ignore TCP networks, to be sure that it is using the InfiniBand network.

#!/bin/bash
#Infiniband MPI test program
#Edit the hosts below to match your test hosts
cat > /tmp/hostfile.$$.mpi <<EOF
HostA slots=1
HostB slots=1
EOF

mpirun --mca btl_openib_verbose 1 --mca btl ^tcp -n 2 -hostfile /tmp/hostfile.$$.mpi IMB-MPI1 PingPong

If all goes well, you should see openib debugging messages from both hosts, together with the job output.

<snip>
# PingPong
[HostB][0,1,1][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query] Set MTU to IBV value 4 (2048 bytes)
[HostB][0,1,1][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query] Set MTU to IBV value 4 (2048 bytes)
[HostA][0,1,0][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query] Set MTU to IBV value 4 (2048 bytes)
[HostA][0,1,0][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query] Set MTU to IBV value 4 (2048 bytes)

#---------------------------------------------------
# Benchmarking PingPong 
# #processes = 2 
#---------------------------------------------------
       #bytes #repetitions      t[usec]   Mbytes/sec
            0         1000         1.53         0.00
            1         1000         1.44         0.66
            2         1000         1.42         1.34
            4         1000         1.41         2.70
            8         1000         1.48         5.15
           16         1000         1.50        10.15
           32         1000         1.54        19.85
           64         1000         1.79        34.05
          128         1000         3.01        40.56
          256         1000         3.56        68.66
          512         1000         4.46       109.41
         1024         1000         5.37       181.92
         2048         1000         8.13       240.25
         4096         1000        10.87       359.48
         8192         1000        15.97       489.17
        16384         1000        30.54       511.68
        32768         1000        55.01       568.12
        65536          640       122.20       511.46
       131072          320       207.20       603.27
       262144          160       377.10       662.96
       524288           80       706.21       708.00
      1048576           40      1376.93       726.25
      2097152           20      1946.00      1027.75
      4194304           10      3119.29      1282.34

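As a sanity check, the Mbytes/sec column can be reproduced from the #bytes and t[usec] columns: IMB divides the message size in MiB (2^20 bytes) by the reported time in seconds. The awk one-liner below is just an illustration of that arithmetic (it is not part of IMB), using the 4194304-byte row from the table above.

```shell
#!/bin/bash
# Recompute the Mbytes/sec figure for the last row of the PingPong
# output above: (bytes / 2^20) / (t[usec] / 10^6).
awk 'BEGIN {
    bytes  = 4194304       # message size, from the #bytes column
    t_usec = 3119.29       # reported time, from the t[usec] column
    printf "%.2f\n", (bytes / 1048576) / (t_usec / 1e6)
}'
```

This prints 1282.34, matching the table's final row.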
If you encounter any errors, consult the excellent Open MPI troubleshooting guide at http://www.openmpi.org

If you want to compare InfiniBand performance with your Ethernet/TCP network, you can re-run the test with flags telling Open MPI to use your Ethernet network instead. (The example below assumes that your test nodes are connected via eth0.)

#!/bin/bash
#TCP MPI test program
#Edit the hosts below to match your test hosts
cat > /tmp/hostfile.$$.mpi <<EOF
HostA slots=1
HostB slots=1
EOF
mpirun --mca btl ^openib --mca btl_tcp_if_include eth0 -n 2 -hostfile /tmp/hostfile.$$.mpi IMB-MPI1 PingPong

You should notice significantly higher latencies than in the InfiniBand test.
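One convenient way to quantify such comparisons is the usual latency/bandwidth summary: take the latency from the 0-byte row and the asymptotic bandwidth from the largest-message row. The snippet below is a rough back-of-the-envelope calculation (not an IMB feature), applied to the InfiniBand table above.

```shell
#!/bin/bash
# Rough network summary from two rows of the PingPong table above:
# latency from the 0-byte row, asymptotic bandwidth from the 4 MB row.
# (bytes / t[usec] gives decimal MB/s; divide by 1000 for GB/s.)
awk 'BEGIN {
    lat_usec = 1.53                    # t[usec] at 0 bytes
    bytes = 4194304; t_usec = 3119.29  # largest-message row
    printf "latency  ~= %.2f usec\n", lat_usec
    printf "bandwidth ~= %.2f GB/s\n", bytes / t_usec / 1000
}'
```

Repeating the same calculation on your Ethernet run's output makes the gap between the two fabrics easy to state in a single pair of numbers.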


Reprinted from: http://pkg-ofed.alioth.debian.org/howto/infiniband-howto-6.html#ss6.6

