Running FVCOM

  1. Compile
# go to the directory that contains the source code
cd /pexue6/chuyan/glerl/code/src

# check which modules are currently loaded
module list

It will show something like the following:

[czhao4@compute-0-201 src]$ module list

Currently Loaded Modules:
  1) users/2021.08   2) tools/EasyBuild/4.7.2

Load the Intel Fortran compiler:

# load intel fortran
module load toolchain/intel/2018b
# or
module load intel/2016.1
# then check if it is loaded
module list

It will show something like the following:

[czhao4@compute-0-201 src]$ module list

Currently Loaded Modules:
  1) users/2021.08                            7) compiler/ifort/2018.3.222-GCC-7.3.0-2.30
  2) tools/EasyBuild/4.7.2                    8) toolchain/iccifort/2018.3.222-GCC-7.3.0-2.30
  3) compiler/GCCcore/7.3.0                   9) mpi/impi/2018.3.222-iccifort-2018.3.222-GCC-7.3.0-2.30
  4) lib/zlib/1.2.11-GCCcore-7.3.0           10) toolchain/iimpi/2018b
  5) tools/binutils/2.30-GCCcore-7.3.0       11) numlib/imkl/2018.3.222-iimpi-2018b
  6) compiler/icc/2018.3.222-GCC-7.3.0-2.30  12) toolchain/intel/2018b

P.S. The following command can be used to list all available modules:

module avail
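
The full listing is long; a pattern can be passed to narrow it (supported by both Lmod and Environment Modules), e.g.:

module avail intel   # list only modules whose names match 'intel'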
  2. vi make.inc

Modify TOPDIR so that it points to the location of the source code:

#========== TOPDIR ========================================================
# TOPDIR is the directory in which this make file and the fvcom source reside

#           TOPDIR  = /mnt/projects/hpc/code/fvcom/FVCOM4.3.1/src
           TOPDIR  = /pexue6/chenfuh/gls_DA_NOAA/code/src

Modify the NetCDF library and include paths:

###do not use -lnetcdff on new bear or rhino2###            IOLIBS       =  -lnetcdff -lnetcdf #-lhdf5_hl -lhdf5 -lz -lcurl -lm
#             IOLIBS       =  -lnetcdf #-lhdf5_hl -lhdf5 -lz -lcurl -lm
##             IOLIBS       =  -lnetcdf #-L/hosts/mao/usr/medm/install/netcdf/3.6.3/em64t/lib -lnetcdf
#             IOINCS       =  #-I/hosts/mao/usr/medm/install/netcdf/3.6.3/em64t/include
##             IOLIBS       =  -L/usr/local/install/netcdf/gcc_ifort/3.6.2/lib  -lnetcdf
##             IOINCS       =  -I/usr/local/install/netcdf/gcc_ifort/3.6.2/include
##       IOLIBS       =  -L/pexue3/pexue/local/netcdf4.2.1.1/lib -lnetcdff -lnetcdf
##       IOINCS       =  -I/pexue3/pexue/local/netcdf4.2.1.1/include
       IOLIBS       =  -L/pexue3/pexue/local/netcdf3.6.3/lib -lnetcdf
       IOINCS       =  -I/pexue3/pexue/local/netcdf3.6.3/include
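
Before pointing IOLIBS / IOINCS at a NetCDF installation, it is worth confirming that the library really exists at that path, e.g. for the install used above:

ls /pexue3/pexue/local/netcdf3.6.3/lib       # should contain libnetcdf.a (or libnetcdf.so)
ls /pexue3/pexue/local/netcdf3.6.3/include   # should contain netcdf.inc / netcdf.mod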

Also modify the compiler flags below; in particular, FC needs to be changed to mpiifort.

#  Intel/MPI Compiler Definitions (SMAST)
#--------------------------------------------------------------------------
         CPP      = /usr/bin/cpp
         COMPILER = -DIFORT
         CC       = mpicc
         CXX      = mpicxx
         CFLAGS   = -O3
#         FC       = mpif90
         FC       = mpiifort
#         DEBFLGS  = -check all -traceback
# Use 'OPT = -O0 -g'  for fast compile to test the make
# Use 'OPT = -xP' for fast run on em64t (Hydra and Guppy)
# Use 'OPT = -xN' for fast run on ia32 (Salmon and Minke)
#         OPT      = -O0 -g
#         OPT      = -axN -xN
         OPT      = -O3

# Do not set static for use with visit!
#         VISOPT   = -Wl,--export-dynamic
#         LDFLAGS  =  $(VISITLIBPATH)
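
Since FC is set to mpiifort, a quick sanity check (assuming the intel toolchain module loaded earlier is still active) is to confirm the wrapper is on the PATH:

which mpiifort   # should resolve to the Intel MPI Fortran wrapper from the loaded module
mpiifort -v      # prints the wrapper and underlying ifort version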

Compile.

# if make.inc was modified, run make clean first; otherwise it is not necessary. Always running make clean first avoids stale-object errors, but the compile takes longer.
make clean
# compile 
make
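
After make finishes, a quick check that the executable was actually produced (a sketch; with a statically linked NetCDF the ldd line may show no NetCDF entries):

ls -lh fvcom                   # the fvcom executable should now exist in the src directory
ldd ./fvcom | grep -i netcdf   # shows dynamically linked NetCDF libraries, if any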
  3. Set up a run folder for the job (e.g., test_run01) and put in it the executable (fvcom) and the namelist file (superior_run.nml); also create an input folder for the input files (the superior_xxx.dat files and the atmospheric forcing file) and an output folder.
cd /pexue6/chuyan/glerl
mkdir test_run01

If you are copying the directory from someone else, use rsync -ave ssh instead of cp (-a preserves permissions and timestamps, -v is verbose, and -e ssh runs the transfer over ssh), e.g.:

rsync -ave ssh /pexue4/xinyuy/DA_run_sst_assim_adjust_Lake_Erie/year_1995 --exclude output /pexue6/chuyan/DA_xinyu_example/test_Erie/year_2018

[czhao4@compute-0-201 glerl]$ cd /pexue6/chuyan/glerl/test_run01
[czhao4@compute-0-201 test_run01]$ pwd
/pexue6/chuyan/glerl/test_run01
[czhao4@compute-0-201 test_run01]$ ls
fvcom  fvcom.sh  fvcom.sh.o95517  fvcom.sh.po95517  input  output  run.sh  superior_run.nml
[czhao4@compute-0-201 test_run01]$ cd /pexue6/chuyan/glerl/test_run01/input/
[czhao4@compute-0-201 input]$ pwd
/pexue6/chuyan/glerl/test_run01/input
[czhao4@compute-0-201 input]$ ls
met-s3-2019xxxx-hrrr-coare_nswrf-regular.nc  superior_cor.dat  superior_grd.dat    superior_spg.dat
superior_2019_restart.nc                     superior_dep.dat  superior_sigma.dat

  4. Modify the namelist file.
vi superior_run.nml
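
The entries that most often need editing are the case/time controls and the input/output directories. The sketch below uses standard FVCOM namelist names with placeholder values; verify them against your own superior_run.nml:

&NML_CASE
 CASE_TITLE  = 'Lake Superior test run',
 TIMEZONE    = 'UTC',
 DATE_FORMAT = 'YMD',
 START_DATE  = '2019-01-01 00:00:00',
 END_DATE    = '2019-01-10 00:00:00'
/
&NML_IO
 INPUT_DIR  = './input',
 OUTPUT_DIR = './output'
/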
  5. To check that the code runs normally, we can use:
cd /pexue6/chuyan/test_run01
source run.sh # mpirun -n 2 ./fvcom --casename=superior

This approach can only run for up to 4 hours, because our account can stay logged in for only 4 hours. If we need to run for more than 4 hours, use the next step.
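
For reference, run.sh is just a thin wrapper around the mpirun command shown in the comment above; a minimal version (assuming the Intel toolchain module is already loaded in the session) would be:

#!/bin/bash
# run.sh - interactive test run with 2 MPI processes; limited by the 4-hour login window
mpirun -n 2 ./fvcom --casename=superior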

  6. Generate a script for HPC
qgenscript 


Rocks Compute Node
Rocks 7.0 (Manzanita)
Profile built 21:12 18-Oct-2021

Kickstarted 21:24 18-Oct-2021

                           Available software suites

  [  1] Abaqus 2022                                                   (p,*)
  [  2] CALYPSO 7.0 + Gaussian 16 A.03 (Dr. Pandey)                   (p,*)
  [  3] CALYPSO 7.0 + VASP 5.4.4 (Dr. Pandey)                         (p,*)
  [  4] COMSOL 5.3a                                                   (p,*)
  [  5] COMSOL 5.3a + MATLAB R2017a                                   (p,*)
  [  6] COMSOL 5.4 (Dr. Minerick)                                     (p,*)
  [  7] CONVERGE 2.4.28 (Dr. Shahbakhti)                              (p,*)
  [  8] Dalton 2018.2                                                 (p)
  [  9] Gaussian 09 Revision D.01                                     (s)
  [ 10] Gaussian 09 Revision D.01                                     (p)
  [ 11] Gaussian 16 Revision A.03                                     (s)
  [ 12] Gaussian 16 Revision A.03                                     (p)
  [ 13] GROMACS 5.0.7                                                 (p)
  [ 14] LAMMPS 2018.06.21 (Stable)                                    (p)
  [ 15] LAMMPS 2018.06.21 (Stable + Intel + OMP)                      (p)
  [ 16] LAMMPS 2018.06.21 (Stable + Intel + OMP w/ Intel 2022a)       (p)
  [ 17] LAMMPS 2018.06.21 (Unstable; Dr. Odegard)                     (p,*)
  [ 18] LAMMPS 2018.06.21 (Unstable + Intel + OMP; Dr. Odegard)       (p,*)
  [ 19] LAMMPS 2020.10.20 (Stable)                                    (p)
  [ 20] LAMMPS 2020.10.20 (Stable; IntelOMP)                          (p)
  [ 21] LAMMPS 2020.10.29 (w/ Python; Drs. Ghosh & Odegard)           (p)
  [ 22] LAMMPS 2022.04.07 (Stable)                                    (p)
  [ 23] LAMMPS 2022.04.07 (Stable; Morse Bond; Dr. Odegard)           (p,*)
  [ 24] LAMMPS 2022.04.07 (Stable; Stillinger Weber; Dr. Ghosh)       (p,*)
  [ 25] LSDalton 2020.1                                               (p)
  [ 26] MATLAB R2017a                                                 (s)
  [ 27] MATLAB R2017a                                                 (p)
  [ 28] MATLAB R2021a                                                 (s)
  [ 29] MATLAB R2021a                                                 (p)
  [ 30] Multiwfn 3.8 dev                                              (p)
  [ 31] NAMD 2.11                                                     (p)
  [ 32] NASA MAC4Z 3.10 (Dr. Odegard)                                 (s,*)
  [ 33] OpenFOAM 1912 (Dr. Masoud)                                    (s,*)
  [ 34] OpenFOAM 1912 (Dr. Masoud)                                    (p,*)
  [ 35] ORCA 5.0.3                                                    (s)
  [ 36] ORCA 5.0.3                                                    (p)
  [ 37] PyLAMMPS 2021.06.30 (Python + LAMMPS; Dr. Odegard)            (s,*)
  [ 38] PyLAMMPS 2021.06.30 (Python + LAMMPS; Dr. Odegard)            (p,*)
  [ 39] Quantum ESPRESSO 6.8                                          (p)
  [ 40] Quantum ESPRESSO 6.8 (Phonon/RandomMatrix; Dr. Pandey)        (p)
  [ 41] R 4.1.2                                                       (s)
  [ 42] R 4.2.2                                                       (s)
  [ 43] SIESTA 4.1-b4                                                 (p,*)
  [ 44] SIESTA 4.1.5                                                  (p,*)
  [ 45] VASP 5.2.12                                                   (p,*)
  [ 46] VASP 5.2.12 (Molecular Dynamics)                              (p,*)
  [ 47] VASP 5.3.3                                                    (p,*)
  [ 48] VASP 5.3.3 (Implicit Solvation Model)                         (p,*)
  [ 49] VASP 5.3.3 (Molecular Dynamics)                               (p,*)
  [ 50] VASP 5.3.3 (Non-Collinear Spin, NCS)                          (p,*)
  [ 51] VASP 5.3.3 (Spin Orbit Coupling, SOC)                         (p,*)
  [ 52] VASP 5.3.3 (SOC + NCS)                                        (p,*)
  [ 53] VASP 5.3.3 (SOC + Wannier90, W90)                             (p,*)
  [ 54] VASP 5.3.3 (SOC + W90 + Z2Pack)                               (p,*)
  [ 55] VASP 5.3.3 (Transition State Tools)                           (p,*)
  [ 56] VASP 5.3.5                                                    (p,*)
  [ 57] VASP 5.3.5 (Poisson's Ratio)                                  (p,*)
  [ 58] VASP 5.3.5 (Spin Orbit Coupling, SOC)                         (p,*)
  [ 59] VASP 5.3.5 (SOC + MAGMOM)                                     (p,*)
  [ 60] VASP 5.3.5 (SSAdNDP)                                          (p,*)
  [ 61] VASP 5.4.4 (Standard)                                         (p,*)
  [ 62] VASP 5.4.4 (Gamma Point)                                      (p,*)
  [ 63] VASP 5.4.4 (Non-Collinear Spin)                               (p,*)
  [ 64] VASP 6.4.1 (Standard)                                         (p,*)
  [ 65] VASP 6.4.1 (Gamma Point)                                      (p,*)
  [ 66] VASP 6.4.1 (Non-Collinear Spin)                               (p,*)
  [ 67] VASP 6.4.1 (Non-Collinear Spin; Dr. Pandey; N/2)              (p,*)
  [ 68] Custom MPICH (toolchain/intel/2018b; Dr. Ra, ME-EM)           (p)
  [ 69] Custom MPICH (toolchain/intel/2018b)                          (p)
  [ 70] Custom MPICH (toolchain/intel/2022a; Dr. Yang, ME-EM)         (s)
  [ 71] Custom Python (lang/Python/3.6.6-intel-2018b; Dr. Ghosh)      (s)

    [q] Quit and try again later
 -----------------------------------------------------------------------------

    [s] Serial application
    [p] Parallel application
    [g] GPU-enabled application
    [*] Application licensed by specific research group(s)

# choose 69
      Select an option : 69
 -----------------------------------------------
                           ---- Processors ----
    Queue           Load    Total  InUse  Free
  -----------------------------------------------
    short.q         0.39     128      64    64
    medium.q        0.77     128      99    29
    long.q          0.50     384     384     0
    chemistry.q     -NA-       0       0     0
    e58263.q        0.01     448       0   400
    epssi.q         0.00     192       0   160
    glrc.q          0.00      64       0    64
    pexue.q         0.01     448       0   400
    physics.q       1.00     192     192     0
    math.q          0.98      96      64     8
    meem1.q         0.40     160      64    96
    meem2.q         -NA-       0       0     0
    meem3.q         0.31     608     180   396
    meem4.q         0.07      64      16    48
    musti.q         0.25     192      40    88
    uscomp.q        0.83     288     224    32
  -----------------------------------------------
    Total                   3104    1327  1777
  -----------------------------------------------
                                   42.75 57.24
  -----------------------------------------------
# choose e58263.q 
  Select an appropriate queue                          : e58263.q 
  Array simulation (optional; 'y' or 'Y')              : 
  Dependent simulation ID (optional; must be a number) : 
  Notification email (optional; 'y' or 'Y')            : y
  Number of processors                                 : 32
  Full path to the folder containing the executable    : /pexue6/chuyan/test_run01
  Name of the executable                               : fvcom
  Options / Parameters for the executable (optional)   : --casename=superior


   * New 'fvcom.sh' has been created.
   * Read through but do not edit this file.
   * Verify that the correct software version / variant module is listed.
   * Submit it using the command:

       qsub fvcom.sh
  7. Go to the directory where you want to run the job
cd /pexue6/chuyan/test_run01

qsub fvcom.sh

Then the job is submitted.

  8. qstat can be used to see the job status
[czhao4@compute-0-201 output]$ qstat
job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-----------------------------------------------------------------------------------------------------------------
  91887 0.50500 QLOGIN     czhao4       r     07/27/2023 16:44:21 qlogin.q@compute-0-201.local       1        
  91915 0.52025 fvcom.sh   czhao4       r     07/27/2023 17:56:43 e58263.q@compute-0-108.local      32        

Sometimes the job cannot be submitted successfully; just try one more time. If it still doesn't work, check the status of the compute nodes with:

qnodes-map e58263.q

You can also restrict the job to specific compute nodes in fvcom.sh, for example (remember that nodes 102 and 109 are permanently broken):

# -cwd: run in the submission directory; -j y: merge stderr into stdout; -S: shell to use
#$ -cwd
#$ -j y
#$ -S /bin/bash
# restrict the job to these specific nodes in the queue
#$ -q e58263.q@c-0-107 -q e58263.q@c-0-110 -q e58263.q@c-0-111 -q e58263.q@c-0-112
# parallel environment and number of slots
#$ -pe mpichg 64
# Not an array simulation
  9. The output file fvcom.sh.o91915 shows the run progress. tail can be used to print it.
tail -120 fvcom.sh.o91915 # shows the last 120 lines
tail -f fvcom.sh.o91915 # follows the file as new lines are written
  10. qdel can be used to stop the job
[czhao4@compute-0-201 test_run01]$ qdel 91915
czhao4 has registered the job 91915 for deletion

  11. If running FVCOM with a coldstart, the superior_run.nml settings are as below:
&NML_STARTUP
 STARTUP_TYPE      = 'coldstart',
 STARTUP_FILE      = 'superior_restart_0001.nc',
 STARTUP_UV_TYPE   = 'default',
 STARTUP_TURB_TYPE = 'default',
 STARTUP_TS_TYPE   = 'constant',
 STARTUP_T_VALS    = 4.0,
 STARTUP_S_VALS    = 0.0,
 STARTUP_DMAX      = -10.0   ! a reference level for
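
For comparison, restarting from an existing restart file (e.g., the superior_2019_restart.nc in the input folder) uses a hotstart. A hedged sketch of the corresponding block is below; check the option strings against the FVCOM manual:

&NML_STARTUP
 STARTUP_TYPE      = 'hotstart',
 STARTUP_FILE      = 'superior_2019_restart.nc',
 STARTUP_UV_TYPE   = 'set values',
 STARTUP_TURB_TYPE = 'set values',
 STARTUP_TS_TYPE   = 'set values'
/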
