Ubuntu 22.04: Installing WRF (Weather Research & Forecasting Model)

Real-data application

WRF Installation:

1. Building Libraries
  • Before getting started, you need to make another directory. Go inside your Build_WRF directory:

    cd Build_WRF
    

    and then make a directory called “LIBRARIES”

    mkdir LIBRARIES
    
  • Depending on the type of run you wish to make, there are various libraries that should be installed. Below are the six libraries used in this guide. Download all six tar files and place them in the LIBRARIES directory (a download example follows the list).

    mpich-3.0.4

    netcdf-c-4.7.2

    netcdf-fortran-4.5.2

    jasper-1.900.1

    libpng-1.2.50

    zlib-1.2.11
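
    If you do not already have these tar files, they can usually be fetched with wget. The download location below (the tar_files directory of the WRF online compile tutorial) is an assumption and may change, so verify the URL and the exact file names/versions before use:

    cd LIBRARIES
    BASE=https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files   #assumed location; verify before use
    wget $BASE/mpich-3.0.4.tar.gz
    wget $BASE/netcdf-c-4.7.2.tar.gz
    wget $BASE/netcdf-fortran-4.5.2.tar.gz
    wget $BASE/jasper-1.900.1.tar.gz
    wget $BASE/libpng-1.2.50.tar.gz
    wget $BASE/zlib-1.2.11.tar.gz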

  • It is important to note that these libraries must all be installed with the same compilers as will be used to install WRF and WPS.

  • Before installing the libraries, these paths need to be set:

    export DIR=/home/ubuntu/Application/WRF/LIBRARIES   #adjust to the LIBRARIES directory you created above
    export CC=gcc
    export CXX=g++
    export FC=gfortran
    export FCFLAGS=-m64
    export F77=gfortran
    export FFLAGS=-m64
    export JASPERLIB=$DIR/grib2/lib
    export JASPERINC=$DIR/grib2/include
    export LDFLAGS=-L$DIR/grib2/lib
    export CPPFLAGS=-I$DIR/grib2/include
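
    Because the libraries, WRF, and WPS must all be built with the same compilers, it is worth confirming that the GNU compilers are installed and report matching versions before building anything:

    which gcc g++ gfortran     #all three must be found
    gcc --version
    g++ --version
    gfortran --version         #the GNU release numbers of gcc, g++, and gfortran should match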
    
  • NetCDF: This library is always necessary! This installation consists of the netcdf-c and netcdf-fortran libraries.

    tar xzvf netcdf-c-4.7.2.tar.gz   #or just .tar if no .gz present
    cd netcdf-c-4.7.2
    ./configure --prefix=$DIR/netcdf --disable-dap  --disable-netcdf-4 --disable-shared
    
    make
    make install
    export PATH=$DIR/netcdf/bin:$PATH
    export NETCDF=$DIR/netcdf
    cd ..
    
    export LIBS="-lnetcdf -lz"
    tar xzvf netcdf-fortran-4.5.2.tar.gz  
    cd netcdf-fortran-4.5.2
    ./configure --prefix=$DIR/netcdf --disable-dap --disable-netcdf-4 --disable-shared
    make
    make install
    export PATH=$DIR/netcdf/bin:$PATH
    export NETCDF=$DIR/netcdf
    cd ..
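
    To confirm that both the C and Fortran pieces landed under the same prefix, you can query the helper scripts that NetCDF installs (a quick sanity check, assuming the default install layout):

    $DIR/netcdf/bin/nc-config --version    #should report the netcdf-c version (4.7.2)
    $DIR/netcdf/bin/nf-config --version    #should report the netcdf-fortran version (4.5.2)
    ls $DIR/netcdf/lib                     #expect libnetcdf.a and libnetcdff.a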
    
  • MPICH: This library is necessary if you are planning to build WRF in parallel. If your machine does not have more than 1 processor, or if you have no need to run WRF with multiple processors, you can skip installing MPICH.

    In principle, any implementation of the MPI-2 standard should work with WRF;

    Assuming all the ‘export’ commands were already issued while setting up NetCDF, you can continue on to install MPICH, issuing each of the following commands:

    tar xzvf mpich-3.0.4.tar.gz   
    cd mpich-3.0.4
    ./configure --prefix=$DIR/mpich
    make
    make install
    export PATH=$DIR/mpich/bin:$PATH
    cd ..
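
    A quick check that the new MPI wrappers (rather than any system-wide MPI) will be picked up when building WRF:

    which mpicc mpif90 mpirun    #all should resolve to $DIR/mpich/bin
    mpirun --version             #should identify MPICH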
    
  • zlib: This is a compression library necessary for compiling WPS (specifically ungrib) with GRIB2 capability.
    Assuming all the “export” commands from the NetCDF install are already set, you can move on to the commands to install zlib.

    tar xzvf zlib-1.2.11.tar.gz   #or just .tar if no .gz present
    cd zlib-1.2.11
    ./configure --prefix=$DIR/grib2
    make
    make install
    cd ..
    
  • libpng: This is a compression library necessary for compiling WPS (specifically ungrib) with GRIB2 capability.
    Assuming all the “export” commands from the NetCDF install are already set, you can move on to the commands to install libpng.

    tar xzvf libpng-1.2.50.tar.gz 
    cd libpng-1.2.50
    ./configure --prefix=$DIR/grib2
    make
    make install
    cd ..
    
  • Jasper: This is a compression library necessary for compiling WPS (specifically ungrib) with GRIB2 capability.
    Assuming all the “export” commands from the NetCDF install are already set, you can move on to the commands to install Jasper.

    tar xzvf jasper-1.900.1.tar.gz  
    cd jasper-1.900.1
    ./configure --prefix=$DIR/grib2
    make
    make install
    cd ..
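
    All three GRIB2 support libraries install into the same grib2 prefix, so a single listing confirms they are in place:

    ls $DIR/grib2/lib        #expect libz*, libpng*, and libjasper* entries
    ls $DIR/grib2/include    #expect zlib.h, png.h, and a jasper/ subdirectory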
    
2. Building WRF
  • After ensuring that all libraries are compatible with the compilers, you can now prepare to build WRF. You can obtain the WRF source code from the WRF model download page.

    Once you obtain the WRF source code (and if you downloaded a tar file, have unpacked the tar file), go into the WRF directory:

    cd WRF
    

    Create a configuration file for your computer and compiler:

    ./configure
    
  • Select the appropriate compiler and processor usage. Only choose an option for a compiler that is installed on the system.

    serial : computes with a single processor. This is only useful for small cases with domain size of about 100x100 grid spaces.

    smpar : Symmetric Multi-processing/Shared Memory Parallel (OpenMP). This option is only recommended for those who are knowledgeable with computation and processing. It works most reliably for IBM machines.

    dmpar : Distributed Memory Parallel (MPI). This is the recommended option.

    dm+sm : Distributed Memory with Shared Memory (for e.g., MPI across nodes with OpenMP within a node). Performance is typically better with the dmpar-only option, and this option is not recommended for those without extensive computation/processing experience.

  • Select the nesting option for the type of simulation desired.

    0 = no nesting

    1 = basic nesting (standard, this is the most common choice)

    2 = nesting with a prescribed set of moves

    3 = nesting that allows a domain to follow a vortex, specific to tropical cyclone tracking

  • Optional configuration options include

    ./configure -d : for debugging. This option removes optimization, which is useful when running a debugger (such as gdb or dbx).

    ./configure -D : for bounds checking and some additional exception handling, plus debugging, with optimization removed. Only PGI, Intel, and gfortran (GNU) compilers have been set up to use this option.

    ./configure -r8 : for double-precision. This only works with PGI, Intel, and gfortran (GNU) compilers.

    Recommended options: 34 (GNU gfortran/gcc with dmpar, in a typical configure listing) and 1 (basic nesting)
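
    If you want to script this step, the two configure prompts can usually be answered from standard input. A minimal sketch using the recommended answers above (a convenience only; running ./configure interactively is always the safe choice, and the option number may differ in your configure listing):

    printf '34\n1\n' | ./configure    #34 = compiler/parallelism option, 1 = basic nesting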

  • Once your configuration is complete, you should have a configure.wrf file, and you are ready to compile. To compile WRF, you will need to decide which type of case you wish to compile. The options are listed below:

    em_real                         (real-data simulations)
    em_fire                         (3D idealized cases)
    em_b_wave
    em_convrad
    em_heldsuarez
    em_les
    em_quarter_ss
    em_tropical_cyclone
    em_grav2d_x                     (2D idealized cases)
    em_hill2d_x
    em_seabreeze2d_x
    em_squall2d_x, em_squall2d_y
    em_scm_xy                       (1D idealized case)

    ./compile case_name >& log.compile
    

    where case_name is one of the options listed above
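
    For example, for a real-data build:

    ./compile em_real >& log.compile    #this step can take a long time (tens of minutes)
    tail -n 20 log.compile              #inspect the end of the log for obvious errors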

  • Once the compilation completes, to check whether it was successful, you need to look for executables in the WRF/main directory:

    ls -ls main/*.exe
    

    If you compiled a real case, you should see:

    wrf.exe (model executable)
    real.exe (real data initialization)
    ndown.exe (one-way nesting)
    tc.exe (for tc bogusing--serial only)
    

    If you compiled an idealized case, you should see:

    wrf.exe (model executable)
    ideal.exe (ideal case initialization)
    

    These executables are linked to 2 different directories:

    WRF/run
    WRF/test/em_real
    

    You can choose to run WRF from either directory.

3. Building WPS
  • After the WRF model is built, the next step is building the WPS program (if you plan to run real cases, as opposed to idealized cases). The WRF model MUST be properly built prior to trying to build the WPS programs. You can obtain the WPS code by following the same directions used for obtaining WRF.

  • Go into the WPS directory:

    cd WPS
    
  • Similar to the WRF model, make sure the WPS directory is clean, by issuing:

    ./clean
    
  • The next step is to configure WPS; however, you first need to set some paths for the ungrib libraries:

    export JASPERLIB=$DIR/grib2/lib
    export JASPERINC=$DIR/grib2/include
    
  • and then you can configure:

    ./configure
    

    Choose the option that lists a compiler matching the one you used to compile WRF, together with serial and grib2. *Note: the option number will likely be different from the number you chose when configuring WRF.

  • The metgrid.exe and geogrid.exe programs rely on the WRF model’s I/O libraries. A line in the configure.wps file points the WPS build system to the location of the I/O libraries in the WRF model directory (see the snippet below for the typical default).

    As long as the name of the WRF model’s top-level directory is “WRF” and the WPS and WRF directories are at the same level (which they should be if you have followed this page exactly), the default setting is correct and there is no need to change it. If it is not correct, you must edit configure.wps and save the change before compiling.
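
    The setting typically looks like the following (the variable name and default can differ between WPS versions, so check your own configure.wps):

    WRF_DIR = ../WRF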

  • now compile WPS:

    ./compile >& log.compile
    
  • If the compilation is successful, there should be 3 executables in the WPS top-level directory, that are linked to their corresponding src/ directories:

    geogrid.exe -> geogrid/src/geogrid.exe
    ungrib.exe -> ungrib/src/ungrib.exe
    metgrid.exe -> metgrid/src/metgrid.exe
    
4. Static Geography Data
  • The WRF modeling system is able to create idealized simulations, though most users are interested in the real-data cases. To initiate a real-data case, the domain’s physical location on the globe and the static information for that location must be created. This requires a data set that includes such fields as topography and land use categories. These data are available from the WRF download page http://www2.mmm.ucar.edu/wrf/users/download/get_sources_wps_geog.html.

  • Download the file and place it in the Build_WRF directory. Keep in mind that if you are downloading the complete dataset, the file is very large. If you are sharing space on a cluster, you may want to consider placing this in a central location so that everyone can use it, and it’s not necessary to download for each person. Uncompress and un-tar the file:

    gunzip *.tar.gz
    tar -xf *.tar
    
  • When you untar the file, the resulting directory will be called “geog”. Rename it to “WPS_GEOG”:

    mv geog WPS_GEOG
    
  • The directory information is given to the geogrid program in the namelist.wps file, in the &geogrid section (a fuller example appears at the end of this section):

    geog_data_path = 'path_to_directory/Build_WRF/WPS_GEOG'
    
  • The data expands to approximately 10 GB. This data allows a user to run the geogrid.exe program.
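
    A minimal single-domain &geogrid fragment is sketched below; every value except geog_data_path is an illustrative placeholder that must be adapted to your own domain:

    &geogrid
     parent_id         = 1,
     parent_grid_ratio = 1,
     i_parent_start    = 1,
     j_parent_start    = 1,
     e_we              = 100,
     e_sn              = 100,
     geog_data_res     = 'default',
     dx                = 30000,
     dy                = 30000,
     map_proj          = 'lambert',
     ref_lat           = 34.83,
     ref_lon           = -81.03,
     truelat1          = 30.0,
     truelat2          = 60.0,
     stand_lon         = -98.0,
     geog_data_path    = 'path_to_directory/Build_WRF/WPS_GEOG'
    /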

5. Real-time Data
  • For real-data cases, the WRF model requires up-to-date meteorological information for both the initial condition and the lateral boundary conditions. This meteorological data is traditionally a GRIB file provided by a previously run external model or analysis. For a semi-operational set-up, the meteorological data is usually sourced from a global model, which permits locating the WRF model’s domains anywhere on the globe.

  • The National Centers for Environmental Prediction (NCEP) runs the Global Forecast System (GFS) model four times daily (initializations valid at 0000, 0600, 1200, and 1800 UTC). This is a global, isobaric, 0.5 degree latitude/longitude forecast data set that is freely available, and each cycle is usually accessible about 4 hours after its initialization time.

  • A single data file needs to be acquired for each requested time period. For example, if we would like hours 0, 6, and 12 of a forecast initialized at 0000 UTC on 12 July 2019, we need the following times:

    2019071200 – 0 h
    2019071206 – 6 h
    2019071212 – 12 h

    These translate to the following file names to access:

    gfs.2019071200/gfs.t00z.pgrb2.0p50.f000
    gfs.2019071200/gfs.t00z.pgrb2.0p50.f006
    gfs.2019071200/gfs.t00z.pgrb2.0p50.f012

    Note that the initialization date and time (gfs.2019071200) remains the same, and that the forecast cycle remains the same (t00z). What is incremented is the forecast hour (f000, f006, f012).

  • Before obtaining the data, create a directory in Build_WRF called “DATA”, and then go into that directory:

    mkdir DATA
    cd DATA
    

    Download link: https://ftp.ncep.noaa.gov/data/nccf/com/gfs/prod/
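
    A sketch of fetching the three example files with wget. Only recent forecast cycles remain on this server, so the 2019 files above will no longer be there; substitute a current date. The subdirectory layout (date/cycle/atmos) is an assumption about the server’s current structure, so confirm the paths in a browser first:

    BASE=https://ftp.ncep.noaa.gov/data/nccf/com/gfs/prod    #run these from inside Build_WRF/DATA
    wget $BASE/gfs.20190712/00/atmos/gfs.t00z.pgrb2.0p50.f000
    wget $BASE/gfs.20190712/00/atmos/gfs.t00z.pgrb2.0p50.f006
    wget $BASE/gfs.20190712/00/atmos/gfs.t00z.pgrb2.0p50.f012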

Running WRF

1. Running Idealized Cases

To run an idealized simulation, the model must have been compiled for the idealized test case of choice, with either a serial compile option (mandatory for the 1-D and 2-D test cases) or a parallel computing option (e.g., dmpar, allowed for 3-D test cases). See the following instructions for either a 2-D idealized case or a 3-D idealized case.

  • Move to the case running directory.

    cd WRF/test/em_b_wave
    
  • Edit the namelist.input file to set integration length, output frequency, domain size, timestep, physics options, and other parameters (see ‘README.namelist’ in the WRF/run directory, or namelist options), and then save the file.

  • Run the ideal initialization program.

    • For a serial build:
    ./ideal.exe >& ideal.log
    
    • For a parallel build:
    mpirun -np 1 ./ideal.exe
    

    Note

    ideal.exe must be run with only a single processor (denoted by “-np 1”), even if the code is built for parallel computing.

    This program typically reads an input sounding file provided in the case directory, and generates an initial condition file ‘wrfinput_d01.’ Idealized cases do not require a lateral boundary file because boundary conditions are handled in the code via namelist options. If the job is successful, the bottom of the “ideal.log” file (or rsl.out.0000 file for parallel execution) should read SUCCESS COMPLETE IDEAL INIT.
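
    For example, to confirm the initialization completed:

    tail ideal.log         #serial build
    tail rsl.out.0000      #parallel build
    #either should end with: SUCCESS COMPLETE IDEAL INIT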

  • Then to run the WRF model, type

    • For a serial build:
    ./wrf.exe >& wrf.log
    
    • For a parallel build (where here we are asking for 8 processors):
     mpirun -np 8 ./wrf.exe
    
2. Running Real-data Cases
  • To run the model for a real-data case, move to the working directory by issuing the command

    > cd WRF/test/em_real
    or
    > cd WRF/run
    
  • Prior to running a real-data case, the WRF Preprocessing System (WPS) must have been successfully run, producing “met_em.” files. Link the met_em files to the WRF running directory.

     ln -sf ../../../WPS/met_em* .
    
  • Start with the default namelist.input file in the directory and edit it for your case.

    • Make sure the parameters in the &time_control and &domains sections are set specifically for your case
    • Make sure the dates and dimensions of the domain match those set in WPS. If only one domain is used, only entries in the first column are read and the other columns are ignored; there is no need to remove the additional columns. A short illustrative namelist fragment follows below.
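
    A short illustrative fragment for a single-domain run (the dates follow the GFS example above; all values are placeholders that must match your own WPS setup, and num_metgrid_levels in particular must match the number of levels in your met_em files):

    &time_control
     run_hours           = 12,
     start_year          = 2019,
     start_month         = 07,
     start_day           = 12,
     start_hour          = 00,
     end_year            = 2019,
     end_month           = 07,
     end_day             = 12,
     end_hour            = 12,
     interval_seconds    = 21600,
     input_from_file     = .true.,
     history_interval    = 60,
     frames_per_outfile  = 1000,
    /
    &domains
     time_step           = 180,
     max_dom             = 1,
     e_we                = 100,
     e_sn                = 100,
     e_vert              = 33,
     num_metgrid_levels  = 34,
     dx                  = 30000,
     dy                  = 30000,
    /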
  • Run the real-data initialization program

    • For WRF built for serial computing, or OpenMP - smpar
    ./real.exe >& real.log
    
    • For WRF built for parallel computing - dmpar - an example requesting to run with four processors
    mpiexec -np 4 ./real.exe
    
  • Run the WRF model

    • For WRF built for serial computing, or OpenMP - smpar
    ./wrf.exe >& wrf.log
    
    • For WRF built for parallel computing - dmpar - an example requesting to run with four processors
    mpiexec -np 4 ./wrf.exe
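
    After the run finishes, check the end of the log (rsl.out.0000 for a parallel run, or the file you redirected output to for a serial run):

    tail rsl.out.0000    #a successful run ends with a line like: wrf: SUCCESS COMPLETE WRF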
    

WRF & WPS Common Mistakes:

  • netcdf-fortran compilation error:

    The gfortran version should be no higher than 7, because some functions were changed in gfortran versions above 7, and the gcc and g++ versions should match the gfortran version.

  • The WRF compilation error:

    You must run the 'configure' script before running the 'compile' script!
    Exiting..
    

    Solution (set the variable below, then re-run ./configure and ./compile):

    export NETCDF_classic=1
    
  • The WPS compilation error:

    There is no metgrid.exe file

    Solution:

    You must strictly follow this order:

    ./clean
    ./configure
    3   #choose the option that corresponds to the one used for WRF
    export NETCDF=$DIR/netcdf
    ./compile
    export NETCDF=$DIR/netcdf/bin
    ./compile
    
  • Error while running geogrid.exe:
    Parsed 28 entries in GEOGRID.TBL
    Processing domain 1 of 1
    ERROR: Could not open /home/xiaomo/Build_WRF/WPS_GEOG/orogwd_10m/con/index
    application called MPI_Abort(MPI_COMM_WORLD, 22077) - process 0

    Solution
    Go to https://www2.mmm.ucar.edu/wrf/src/wps_files/ and download the corresponding missing data file.

  • Error while running ungrib.exe

    ./ungrib.exe: error while loading shared libraries: libpng16.so.16: cannot open shared object file: No such file or directory
    

    Solution:

    sudo vim /etc/ld.so.conf
    

    add a line containing the directory that holds libpng16.so.16 (on Ubuntu this is typically /usr/lib/x86_64-linux-gnu), then rebuild the linker cache:

    sudo ldconfig
    