# rocks list host boot                            # list every node's boot action
# rocks set host boot compute-0-0 action=install
# ssh compute-0-0 "shutdown -r now"
# shoot-node compute-0-0

If a compute node in the cluster needs to be reinstalled, run the following on that node to reinstall its system:
/boot/kickstart/cluster-kickstart
Alternatively, run the following on the frontend to reinstall all compute nodes:
# rocks run host compute '/boot/kickstart/cluster-kickstart'
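The per-node reinstall steps above can be sketched as a small helper. The `rocks` and `ssh` invocations are the ones from this section; the `RUN=echo` dry-run switch is only an illustration device so the sketch can be exercised safely off the frontend:

```shell
#!/bin/sh
# Sketch: force a reinstall of one compute node from the frontend.
# reinstall_node NODE: flag NODE for install on next boot, then reboot it.
reinstall_node() {
    node="$1"
    $RUN rocks set host boot "$node" action=install
    $RUN ssh "$node" "shutdown -r now"
}

# Demonstrate in dry-run mode: RUN=echo prints each command instead of
# executing it; unset RUN (or set it empty) to perform the real reinstall.
RUN=echo
reinstall_node compute-0-0
```

With `RUN=echo` the script just prints the two commands it would run, which makes it easy to review before pointing it at a live node.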
Adding a user:
# useradd username
# passwd username
# rocks sync users
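The three commands above can be wrapped into one helper. This is only a sketch: `alice` is a placeholder username, and `RUN=echo` keeps it a dry run for illustration:

```shell
#!/bin/sh
# Sketch: create a user on the frontend and propagate it to compute nodes.
add_cluster_user() {
    user="$1"
    $RUN useradd "$user"
    $RUN passwd "$user"           # sets the password interactively
    $RUN rocks sync users         # push the new account to compute nodes
}

RUN=echo                          # dry run: print commands instead of running
add_cluster_user alice            # 'alice' is a placeholder username
```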
Installation
1. Install and Configure Your Frontend: see the manual.
2. Install Your Compute Nodes
Login to the frontend node as root.
a) Run the program which captures compute node DHCP requests and puts their information into the Rocks MySQL database:
# insert-ethers
b) Power up the first compute node.
# During this step, connect a keyboard to the blade node; as soon as you hear the beep, press F12 immediately.
After your frontend completes its installation, the last step is to force a re-installation of all of your compute nodes. The following will force a PXE (network install) reboot of all your compute nodes.
# ssh-agent $SHELL
# ssh-add
# rocks run host compute '/boot/kickstart/cluster-kickstart-pxe'
2. HPC: Using MPI
2.1 Environment Modules for OpenMPI
To NOT load the Rocks default module definition, set the environment variable ROCKS_USER_MODULE_DEF to a non-zero string:
export ROCKS_USER_MODULE_DEF=True
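To make this persist across logins, the export can go in your shell startup file. A minimal ~/.bashrc fragment, assuming the Environment Modules `module` command is available and that the module you want is named `rocks-openmpi` (check `module avail` on your cluster; the name is an assumption):

```shell
# ~/.bashrc fragment (sketch): skip the Rocks default module definition
export ROCKS_USER_MODULE_DEF=True
# Then load the MPI module you actually want. The name 'rocks-openmpi'
# is an assumption -- use whatever 'module avail' lists on your cluster.
module load rocks-openmpi
```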
2.2. Using mpirun from OpenMPI
To interactively launch a test OpenMPI program on two processors:
• Create a file in your home directory named machines, and put two entries in it, such as:
compute-0-0
compute-0-1
# You can add the names of all usable blade nodes to this file.
• Now launch the job from the frontend:
$ ssh-agent $SHELL
$ ssh-add
$ /opt/openmpi/bin/mpirun -np 2 -machinefile machines /opt/mpi-tests/bin/mpi-ring
# The "2" above means only the first two machines listed in machines are used.
You must run MPI programs as a regular user (that is, not root).
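The two steps above (write the machines file, then run mpi-ring) can be combined into one sketch. The paths and node names are the ones from this section; the `command -v` guard is only there so the sketch degrades gracefully on a machine without OpenMPI installed:

```shell
#!/bin/sh
# Sketch: generate the machines file, then launch mpi-ring on 2 processors.
cat > machines <<'EOF'
compute-0-0
compute-0-1
EOF

NP=2    # use the first $NP entries of the machines file
if command -v /opt/openmpi/bin/mpirun >/dev/null 2>&1; then
    /opt/openmpi/bin/mpirun -np "$NP" -machinefile machines \
        /opt/mpi-tests/bin/mpi-ring
else
    echo "mpirun not found; machines file written for later use"
fi
```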
2.3 Using mpirun from MPICH
To interactively launch a test MPICH program on two processors:
• Create a file in your home directory named machines, and put two entries in it, such as:
compute-0-0
compute-0-1
• Compile a test program using the MPICH environment:
$ cd $HOME
$ mkdir mpich-test
$ cd mpich-test
$ cp /opt/mpi-tests/src/mpi-ring.c .
$ /opt/mpich/gnu/bin/mpicc -o mpi-ring mpi-ring.c -lm
• Now launch the job from the frontend:
$ ssh-agent $SHELL
$ ssh-add
$ /opt/mpich/gnu/bin/mpirun -nolocal -np 2 -machinefile $HOME/machines \
  $HOME/mpich-test/mpi-ring
You must run MPI programs as a regular user (that is, not root). If you don't have a user account on the cluster, create one for yourself, and propagate the information to the compute nodes with:
# useradd username
# rocks sync users
2.4 Running the "High-Performance Linpack" (HPL) benchmark
To interactively launch HPL, first tune its input file HPL.dat; see http://www.netlib.org/benchmark/hpl/tuning.html
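As a starting point, here is a sketch that writes a minimal HPL.dat for a two-process run (P x Q = 1 x 2, matching the two-node machines file above). The values N=5000 and NB=128 are illustrative guesses, not tuned settings; the tuning guide above explains how to choose them for your cluster:

```shell
#!/bin/sh
# Sketch: write a minimal HPL.dat for a 2-rank run (P*Q must equal -np).
cat > HPL.dat <<'EOF'
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
5000         Ns
1            # of NBs
128          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
1            Ps
2            Qs
16.0         threshold
1            # of panel fact
1            PFACTs (0=left, 1=Crout, 2=Right)
1            # of recursive stopping criterium
4            NBMINs (>= 1)
1            # of panels in recursion
2            NDIVs
1            # of recursive panel fact.
1            RFACTs (0=left, 1=Crout, 2=Right)
1            # of broadcast
0            BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1            # of lookahead depth
0            DEPTHs (>=0)
2            SWAP (0=bin-exch,1=long,2=mix)
64           swapping threshold
0            L1 in (0=transposed,1=no-transposed) form
0            U  in (0=transposed,1=no-transposed) form
1            Equilibration (0=no,1=yes)
8            memory alignment in double (> 0)
EOF
# Then launch the xhpl binary with the same mpirun/machines setup as the
# mpi-ring example, e.g.:
#   /opt/openmpi/bin/mpirun -np 2 -machinefile machines .../xhpl
# (the xhpl install path varies by cluster, so it is not filled in here)
```

N (problem size), NB (block size), and the P x Q process grid dominate performance; increase N until the run uses most of the memory on the participating nodes.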