HPL Source Structure Analysis

Directory layout:

$ cd /home/hipper/ex_hpl_hpcg/

$ pwd

$ mkdir ./openmpi

$ mkdir ./openblas

$ mkdir ./hpl

$ tree

1. Installing OpenMPI

1.1.1 Download, configure, build, and install OpenMPI via a Makefile

Makefile:

all:
        wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.6.tar.gz && \
        tar zxf openmpi-4.1.6.tar.gz && \
        cd openmpi-4.1.6/ && \
        ./configure --enable-mpi-cxx --with-cuda --prefix=${PWD}/local/ 2>&1 | tee config.out && \
        make -j all 2>&1 | tee make.out

install:
        make -C openmpi-4.1.6/ install 2>&1 | tee install.out

.PHONY: clean
clean:
        -rm -rf ./openmpi-4.1.6/ ./openmpi-4.1.6.tar.gz
$ make
$ make install

$ ls

1.1.2 Notes

If you are building OpenMPI 5.0 or later, the following ./configure option must be dropped:

--enable-mpi-cxx

From 5.0 on, MPI supports only the far more widely used C-style API and no longer ships the C++ bindings. The C++ bindings still see some use, but they were never mainstream, and the C API is powerful and complete enough for any purpose.

Whether you build 4.x or 5.x, if CUDA is not installed, omit this ./configure option:

--with-cuda

Also, the install directory must be specified as an absolute path (note that ${PWD} expands to one), for example:

--prefix=${PWD}/local/

Alternatively, skip the Makefile and install OpenMPI directly from the command line:

$ wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.6.tar.gz
$ tar zxf openmpi-4.1.6.tar.gz
$ cd openmpi-4.1.6/

where the configure option --prefix=/.../ must be an absolute path, for example:
$ ./configure --enable-mpi-cxx --with-cuda --prefix=/home/hipper/ex_hpl/openmpi/local/ 2>&1 | tee config.out
$ make -j all 2>&1 | tee make.out
$ make install 2>&1 | tee install.out

1.1.3 Setting environment variables

 export PATH=/home/hipper/ex_hpl_hpcg/openmpi/local/bin:$PATH
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/hipper/ex_hpl_hpcg/openmpi/local/lib
 
 cd examples
 make
 mpirun -np 7 hello_c

2. Installing OpenBLAS

Makefile:
all:
	wget https://github.com/OpenMathLib/OpenBLAS/archive/refs/tags/v0.3.27.tar.gz
	tar zxf v0.3.27.tar.gz 
	make -C OpenBLAS-0.3.27 FC=gfortran -j
 
install:
	make -C OpenBLAS-0.3.27 install PREFIX=../local/
#PREFIX=/opt/OpenBLAS
# PREFIX=../local/
 
clean:
	-rm -rf ./local/ ./OpenBLAS-0.3.27/ ./v0.3.27.tar.gz

$ make
$ make install
$ ls

3. Configuring, building, and running HPL

3.1 Download the HPL source

$ wget https://www.netlib.org/benchmark/hpl/hpl-2.3.tar.gz
$ tar zxf hpl-2.3.tar.gz
$ cd hpl-2.3/

3.2 Configure

$ cd hpl-2.3/
$ cp ./setup/Make.Linux_PII_CBLAS ./

$ vim ./Make.Linux_PII_CBLAS

# ######################################################################
#
# ----------------------------------------------------------------------
# - shell --------------------------------------------------------------
# ----------------------------------------------------------------------
#
SHELL        = /bin/sh
#
CD           = cd
CP           = cp
LN_S         = ln -s
MKDIR        = mkdir
RM           = /bin/rm -f
TOUCH        = touch
#
# ----------------------------------------------------------------------
# - Platform identifier ------------------------------------------------
# ----------------------------------------------------------------------
#
ARCH         = Linux_PII_CBLAS
#
# ----------------------------------------------------------------------
# - HPL Directory Structure / HPL library ------------------------------
# ----------------------------------------------------------------------
#
#/home/hipper/ex_hpl_hpcg/hpl/hpl-2.3
TOPdir       = $(HOME)/ex_hpl_hpcg/hpl/hpl-2.3
INCdir       = $(TOPdir)/include
BINdir       = $(TOPdir)/bin/$(ARCH)
LIBdir       = $(TOPdir)/lib/$(ARCH)
#
HPLlib       = $(LIBdir)/libhpl.a
#
# ----------------------------------------------------------------------
# - Message Passing library (MPI) --------------------------------------
# ----------------------------------------------------------------------
# MPinc tells the  C  compiler where to find the Message Passing library
# header files,  MPlib  is defined  to be the name of  the library to be
# used. The variable MPdir is only used for defining MPinc and MPlib.
#
#MPdir        = /usr/local/mpi
MPdir        = $(HOME)/ex_hpl_hpcg/openmpi/local
MPinc        = -I$(MPdir)/include
MPlib        = $(MPdir)/lib/libmpi.so
#
# ----------------------------------------------------------------------
# - Linear Algebra library (BLAS or VSIPL) -----------------------------
# ----------------------------------------------------------------------
# LAinc tells the  C  compiler where to find the Linear Algebra  library
# header files,  LAlib  is defined  to be the name of  the library to be
# used. The variable LAdir is only used for defining LAinc and LAlib.
#
#LAdir        = $(HOME)/netlib/ARCHIVES/Linux_PII
LAdir        = $(HOME)/ex_hpl_hpcg/openblas/local
LAinc        =
LAlib        = $(LAdir)/lib/libopenblas.a
#
# ----------------------------------------------------------------------
# - F77 / C interface --------------------------------------------------
# ----------------------------------------------------------------------
# You can skip this section  if and only if  you are not planning to use
# a  BLAS  library featuring a Fortran 77 interface.  Otherwise,  it  is
# necessary  to  fill out the  F2CDEFS  variable  with  the  appropriate
# options.  **One and only one**  option should be chosen in **each** of
# the 3 following categories:
#
# 1) name space (How C calls a Fortran 77 routine)
#
# -DAdd_              : all lower case and a suffixed underscore  (Suns,
#                       Intel, ...),                           [default]
# -DNoChange          : all lower case (IBM RS6000),
# -DUpCase            : all upper case (Cray),
# -DAdd__             : the FORTRAN compiler in use is f2c.
#
# 2) C and Fortran 77 integer mapping
#
# -DF77_INTEGER=int   : Fortran 77 INTEGER is a C int,         [default]
# -DF77_INTEGER=long  : Fortran 77 INTEGER is a C long,
# -DF77_INTEGER=short : Fortran 77 INTEGER is a C short.
#
# 3) Fortran 77 string handling
#
# -DStringSunStyle    : The string address is passed at the string loca-
#                       tion on the stack, and the string length is then
#                       passed as  an  F77_INTEGER  after  all  explicit
#                       stack arguments,                       [default]
# -DStringStructPtr   : The address  of  a  structure  is  passed  by  a
#                       Fortran 77  string,  and the structure is of the
#                       form: struct {char *cp; F77_INTEGER len;},
# -DStringStructVal   : A structure is passed by value for each  Fortran
#                       77 string,  and  the  structure is  of the form:
#                       struct {char *cp; F77_INTEGER len;},
# -DStringCrayStyle   : Special option for  Cray  machines,  which  uses
#                       Cray  fcd  (fortran  character  descriptor)  for
#                       interoperation.
#
F2CDEFS      =
#
# ----------------------------------------------------------------------
# - HPL includes / libraries / specifics -------------------------------
# ----------------------------------------------------------------------
#
HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) $(LAinc) $(MPinc) -lpthread
HPL_LIBS     = $(HPLlib) $(LAlib) $(MPlib) -lpthread
#
# - Compile time options -----------------------------------------------
#
# -DHPL_COPY_L           force the copy of the panel L before bcast;
# -DHPL_CALL_CBLAS       call the cblas interface;
# -DHPL_CALL_VSIPL       call the vsip  library;
# -DHPL_DETAILED_TIMING  enable detailed timers;
#
# By default HPL will:
#    *) not copy L before broadcast,
#    *) call the BLAS Fortran 77 interface,
#    *) not display detailed timing information.
#
HPL_OPTS     = -DHPL_CALL_CBLAS
#
# ----------------------------------------------------------------------
#
HPL_DEFS     = $(F2CDEFS) $(HPL_OPTS) $(HPL_INCLUDES)
#
# ----------------------------------------------------------------------
# - Compilers / linkers - Optimization flags ---------------------------
# ----------------------------------------------------------------------
#
CC           = $(HOME)/ex_hpl_hpcg/openmpi/local/bin/mpicc
CCNOOPT      = $(HPL_DEFS)
CCFLAGS      = $(HPL_DEFS) -fomit-frame-pointer -O3 -funroll-loops
#
# On some platforms,  it is necessary  to use the Fortran linker to find
# the Fortran internals used in the BLAS library.
#
LINKER       = $(HOME)/ex_hpl_hpcg/openmpi/local/bin/mpif77
LINKFLAGS    = $(CCFLAGS)
#
ARCHIVER     = ar
ARFLAGS      = r
RANLIB       = echo
#
# ----------------------------------------------------------------------

Differences from the stock template:

$ diff Make.Linux_PII_CBLAS setup/Make.Linux_PII_CBLAS 

70,71c70
< #/home/hipper/ex_hpl_hpcg/hpl/hpl-2.3
< TOPdir       = $(HOME)/ex_hpl_hpcg/hpl/hpl-2.3
---
> TOPdir       = $(HOME)/hpl
85,86c84
< #MPdir        = /usr/local/mpi
< MPdir        = /home/hipper/ex_hpl_hpcg/openmpi/local
---
> MPdir        = /usr/local/mpi
88c86
< MPlib        = $(MPdir)/lib/libmpi.so
---
> MPlib        = $(MPdir)/lib/libmpich.a
97,98c95
< #LAdir        = $(HOME)/netlib/ARCHIVES/Linux_PII
< LAdir        = $(HOME)/ex_hpl_hpcg/openblas/local
---
> LAdir        = $(HOME)/netlib/ARCHIVES/Linux_PII
100c97
< LAlib        = $(LAdir)/lib/libopenblas.a
---
> LAlib        = $(LAdir)/libcblas.a $(LAdir)/libatlas.a
147,148c144,145
< HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) $(LAinc) $(MPinc) -lpthread
< HPL_LIBS     = $(HPLlib) $(LAlib) $(MPlib) -lpthread
---
> HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) $(LAinc) $(MPinc)
> HPL_LIBS     = $(HPLlib) $(LAlib) $(MPlib)
172c169
< CC           = $(HOME)/ex_hpl_hpcg/openmpi/local/bin/mpicc
---
> CC           = /usr/bin/gcc
179c176
< LINKER       = $(HOME)/ex_hpl_hpcg/openmpi/local/bin/mpif77
---
> LINKER       = /usr/bin/g77

3.3 Build and run

$ make arch=Linux_PII_CBLAS

$ ls

Single-node run:

$ /home/hipper/ex_hpl_hpcg/openmpi/local/bin/mpirun -np 4 ./xhpl > HPL-Benchmark.txt

Multi-node run:

$ mpirun -np 8 --host node1,node2 ./xhpl > HPL-Benchmark.txt

A site that generates tuned HPL.dat files:

https://www.advancedclustering.com/act_kb/tune-hpl-dat-file
Note that the value passed to mpirun's -np must equal P × Q in HPL.dat, and is typically the total number of CPU cores across the whole cluster.

For example, for a four-node cluster with 10 CPU cores per node, -np would be 40 and P × Q must equal 40 (e.g. P = 5, Q = 8).

Save the content below as an HPL.dat file for testing.
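The original post attached its HPL.dat as an image, which is not reproduced here. As a hedged stand-in, the following is a minimal file in the standard hpl-2.3 HPL.dat layout, sized for a small 4-process test (P = 2, Q = 2, so run with -np 4). The values of N and NB are deliberately conservative placeholders and should be regenerated with the tuning site above for real measurements:

```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
10000        Ns
1            # of NBs
192          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
2            Ps
2            Qs
16.0         threshold
1            # of panel fact
2            PFACTs (0=left, 1=Crout, 2=Right)
1            # of recursive stopping criterium
4            NBMINs (>= 1)
1            # of panels in recursion
2            NDIVs
1            # of recursive panel fact.
1            RFACTs (0=left, 1=Crout, 2=Right)
1            # of broadcast
1            BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1            # of lookahead depth
1            DEPTHs (>=0)
2            SWAP (0=bin-exch,1=long,2=mix)
64           swapping threshold
0            L1 in (0=transposed,1=no-transposed) form
0            U  in (0=transposed,1=no-transposed) form
1            Equilibration (0=no,1=yes)
8            memory alignment in double (> 0)
```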

4. MPI function overview

A quick rundown of the MPI functions involved:

Initialize an MPI program:
int MPI_Init(int *argc, char ***argv)
argc: pointer to the argument count
argv: pointer to the argument vector

End the parallel run:
int MPI_Finalize(void)

Get the calling process's rank within a communicator:
int MPI_Comm_rank(MPI_Comm comm, int *rank)
comm: handle of the communicator the process belongs to
rank: rank of the calling process within the communicator

Get the total number of processes in a communicator:
int MPI_Comm_size(MPI_Comm comm, int *size)
comm: communicator handle
size: total number of processes in comm


Send a message:
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
buf: starting address of the send buffer (choice type)
count: number of elements to send (non-negative integer)
datatype: datatype of the elements being sent (handle)
dest: rank of the destination process (integer)
tag: message tag (integer)
comm: communicator (handle)


Receive a message:
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
buf: starting address of the receive buffer (choice type)
count: maximum number of elements to receive (non-negative integer)
datatype: datatype of the elements being received (handle)
source: rank of the source process (integer, or MPI_ANY_SOURCE)
tag: message tag (integer, or MPI_ANY_TAG)
comm: communicator (handle)
status: status object recording the actual source, tag, and error of the received message


Timing:
double MPI_Wtime(void)
Example:
double starttime, endtime;
...
starttime = MPI_Wtime();
/* code to be timed */
endtime = MPI_Wtime();
Elapsed time: endtime - starttime

Synchronization barrier:
int MPI_Barrier(MPI_Comm comm)
comm: communicator (handle)
MPI_Barrier blocks every process in the communicator that calls it until all of them have called it; only then do the calls return. It is used to synchronize the processes.


Data reduction:
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
sendbuf: starting address of the send buffer (choice type)
recvbuf: address of the receive buffer, significant only at root (choice type)
count: number of elements in the send buffer (integer)
datatype: element type of the send buffer (handle)
op: reduction operator (handle)
root: rank of the root process (integer)
comm: communicator (handle)

5. HPL source framework analysis

The main function of ./xhpl lives in hpl-2.3/testing/ptest/HPL_pddriver.c.

Saving this as a memo for now; to be continued.
