Quickstart Guide: FMS doubly periodic HIRAM

Summary

This model employs the finite-volume dynamical core on a 180 by 180 square doubly periodic grid with 32 vertical levels.
The grid cell size is 50 km. Insolation is set to its equatorial value, and the surface is flat and wet (aqua model).

1. Acquire the Source Code and Runscripts

A zipped tar ball containing the code and scripts can be downloaded here. This package contains code, scripts and a few small tools.

2. Acquire the Input Datasets

You may download the input data here. This file must first be unzipped using gunzip. Note that the resulting tar file is 5 GB in size. Extract the files into a location where you have sufficient free space.
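
For example, the unpacking steps look like the following; the archive name used here is hypothetical, so substitute the name of the file you actually downloaded, and run the commands in a location with enough free space:

 gunzip fms_dp_hiram_input.tar.gz      # yields a tar file of roughly 5 GB
 tar -xf fms_dp_hiram_input.tar        # extract the input data here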

3. Run the Model

3.1. Functionality of the Sample Runscripts

This release includes a compile script and a run script for the doubly periodic HIRAM model in the exp directory. The compile script:

  • generates the mppnccombine executable, which combines individual atmospheric restart files from multiprocessor output into one netcdf file.
  • generates the landnccombine executable, which combines individual land restart files from multiprocessor output into one netcdf file.
  • compiles and links the model source code.

The run script:

  • creates a working directory where the model will be run.
  • creates or copies the required input data into the working directory.
  • runs the model.
  • combines distributed output and renames the output files using the timestamp.

Note that the directory paths and file paths are variables. They are initially set to correspond to the directory structure as it exists after extraction from the tar file, but are made variables to accommodate changes to this directory structure.

The directory path most likely to need changing is workdir, a temporary directory where the model will run. A large amount of data is copied into the work directory, and output from the model is also written there, so workdir must be large enough to accommodate both. The input data is approximately 20 GB and the model output is potentially even larger.
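
For example, in the runscript (csh) the setting might look like the following; the path shown is purely illustrative:

 set workdir = /scratch/$USER/dp_hiram_work   # needs room for ~20 GB of input plus model output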

3.2. Portability Issues with the Sample Runscripts

If you encounter a compile error when executing the compile script, first check whether you have correctly customized your mkmf template. The scripts use the mkmf utility, which creates a Makefile to facilitate compilation. The mkmf utility uses a platform-specific template for setting up system and platform dependent parameters. Sample templates for various platforms are provided in the bin directory. You may need to consult your system administrator to set up a compilation template for your platform and ensure the locations for system libraries are defined correctly. For a complete description of mkmf see the mkmf documentation.

3.3. Specifying the model resolution

All grid parameters are controlled by setting namelist variables in fv_core_nml. The doubly periodic grid is specified by setting grid_type to 4.

Changing grid_type from 4 would necessitate major changes to the rest of the model and is not recommended.

The horizontal grid size is specified by setting npx and npy, which are the number of grid cells in the X and Y directions respectively, plus one. For example, 180 grid cells in each direction would be npx=181, npy=181.

For a doubly periodic grid the grid cells are rectangular or square. The size of the grid cells is set with dx_const and dy_const, in meters.
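
Taken together, a sketch of these settings for the released 180 by 180, 50 km configuration is shown below; the release's fv_core_nml sets many other variables that are omitted here:

 &fv_core_nml
     grid_type = 4         ! doubly periodic grid
     npx = 181             ! 180 grid cells in X, plus one
     npy = 181             ! 180 grid cells in Y, plus one
     npz = 32              ! 32 vertical levels
     dx_const = 50000.     ! grid cell size in X, in meters (50 km)
     dy_const = 50000.     ! grid cell size in Y, in meters (50 km)
 /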

3.4. layout and io_layout

NOTE: The description of layout and io_layout here is similar to, but not exactly the same as, that included in the documentation of other GFDL models. It has been modified to reflect the realities of the doubly periodic grid.

3.4.1. layout

The horizontal grid of each component model is partitioned among processors according to the setting of the namelist variable “layout”. If layout is not specified the model will determine it according to a default scheme.

Consider a horizontal grid with 30 cells in one direction and 16 in the other. (This does not have to be a rectangular longitude-by-latitude grid; model grids can be considered logically rectangular.) If run on 24 processors, one could set layout = 6,4. The grid would be partitioned among processors as shown below. (Each asterisk represents a grid cell.)

 +---------+---------+---------+---------+---------+---------+
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |         |         |         |         |         |         |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |  pe=18  |  pe=19  |  pe=20  |  pe=21  |  pe=22  |  pe=23  |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |         |         |         |         |         |         |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 +---------+---------+---------+---------+---------+---------+
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |         |         |         |         |         |         |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |  pe=12  |  pe=13  |  pe=14  |  pe=15  |  pe=16  |  pe=17  |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |         |         |         |         |         |         |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 +---------+---------+---------+---------+---------+---------+
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |         |         |         |         |         |         |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |  pe=6   |  pe=7   |  pe=8   |  pe=9   |  pe=10  |  pe=11  |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |         |         |         |         |         |         |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 +---------+---------+---------+---------+---------+---------+
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |         |         |         |         |         |         |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |  pe=0   |  pe=1   |  pe=2   |  pe=3   |  pe=4   |  pe=5   |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 |         |         |         |         |         |         |
 |* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
 +---------+---------+---------+---------+---------+---------+

The choice of layout has no effect on the model's solution, but it can affect the code's performance on some platforms.
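
As a sketch, the partitioning illustrated above corresponds to a namelist setting like the one below (for the atmosphere, layout is set in fv_core_nml; other component models have their own equivalent namelists):

 layout = 6,4     ! 6 ranks in X by 4 ranks in Y = 24 processors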

3.4.2. io_layout

The I/O efficiency of high resolution models run on a large number of processors can be significantly impacted by the number of files written. For this reason, this code allows control over the distributed output of the diagnostic and restart files.

io_layout is a namelist variable that controls the partitioning of multiple processor output among files. Each component model has its own io_layout.

Consider a model running on 24 processors with layout=6,4 as shown above. If io_layout were set to 3,2, the processor output would be aggregated into files as shown below.

 +-------------------+-------------------+-------------------+
 |                   |                   |                   |
 |                   |                   |                   |
 |                   |                   |                   |
 |  pe=18     pe=19  |  pe=20     pe=21  |  pe=22     pe=23  |
 |                   |                   |                   |
 |                   |                   |                   |
 |                   |                   |                   |
 |      file 3       |      file 4       |      file 5       |
 |                   |                   |                   |
 |                   |                   |                   |
 |                   |                   |                   |
 |  pe=12     pe=13  |  pe=14     pe=15  |  pe=16     pe=17  |
 |                   |                   |                   |
 |                   |                   |                   |
 |                   |                   |                   |
 +-------------------+-------------------+-------------------+
 |                   |                   |                   |
 |                   |                   |                   |
 |                   |                   |                   |
 |  pe=6      pe=7   |  pe=8      pe=9   |  pe=10     pe=11  |
 |                   |                   |                   |
 |                   |                   |                   |
 |                   |                   |                   |
 |      file 0       |      file 1       |      file 2       |
 |                   |                   |                   |
 |                   |                   |                   |
 |                   |                   |                   |
 |  pe=0      pe=1   |  pe=2      pe=3   |  pe=4      pe=5   |
 |                   |                   |                   |
 |                   |                   |                   |
 |                   |                   |                   |
 +-------------------+-------------------+-------------------+

Output from a single processor cannot be divided between files. This means that io_layout must be chosen such that layout(1) is a multiple of io_layout(1) and layout(2) is a multiple of io_layout(2).
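
For example, the 24-processor case illustrated above satisfies this constraint (a sketch; for the atmosphere both variables are set in fv_core_nml):

 layout    = 6,4     ! 24 processors
 io_layout = 3,2     ! 6 output files; 6 is a multiple of 3 and 4 is a multiple of 2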

The output file names consist of a base name with a four digit file number appended to it. For the example above the restart files for the ice model would be:

ice_model.res.nc.0000
ice_model.res.nc.0001
ice_model.res.nc.0002
ice_model.res.nc.0003
ice_model.res.nc.0004
ice_model.res.nc.0005

Tools are provided to combine these distributed files into files of data on a single grid. The processor numbers are removed from the file names after combining. Combining of restart files is done in the run script provided, but is not required. The code has the capability of restarting with either combined or distributed restart files.

If io_layout is set to 1,1 then all processors write to a single file and the file number suffix does not appear in the file name.

3.5. Restarting and cold-starting

3.5.1. restarting

Restart files are written to a sub-directory of the working directory named RESTART. These files contain information about the state of the model at the point of termination. Each component model and/or sub-component may have restart files. To continue a previous integration, these files are placed in the INPUT directory, where they are read at initialization to restore the state of the model as it was at the termination of the previous integration.
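
A minimal sketch of this step, assuming the next run reuses the same working directory, is:

 # stage the previous run's restart files as initial conditions for the next run
 cp $workdir/RESTART/* $workdir/INPUT/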

3.5.2. cold-starting

If restart files do not exist in the INPUT directory, a default initialization is performed, also referred to as a cold start.

For a cold start the model fills its fields with constant values. The result is a model state that is very flat and far from anything scientifically interesting, so a cold-started model needs to be spun up. The spin-up time varies with the model and the user's purpose.

3.6. Time and calendar

Control of model time and calendar is a common source of confusion. Only a couple of facts need to be understood to avoid most of this confusion. The first is how the model time and calendar are set.

When coupler.res does not exist:
current_date and calendar are as specified in coupler_nml and the namelist setting of force_date_from_namelist is ignored.

When coupler.res does exist and force_date_from_namelist=.true.:
current_date and calendar are as specified in coupler_nml.

When coupler.res does exist and force_date_from_namelist=.false.:
current_date and calendar are read from coupler.res and the namelist settings of current_date and calendar are ignored.
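
As a sketch, the relevant coupler_nml entries look like the following; the date and calendar values are illustrative:

 &coupler_nml
     current_date = 1982, 1, 1, 0, 0, 0     ! year, month, day, hour, minute, second
     calendar = 'julian'
     force_date_from_namelist = .false.     ! use coupler.res when it exists
 /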

The second is the date which appears at the top of the diag_table. This is the model initial time. It is used for two purposes.

  • It is used to define the time axis of the netCDF model output; the time values are measured from this initial time.
  • It is also used in the time interpolation of certain input data. Because of this, it is recommended that it always equal the date used for current_date (in coupler_nml) in the initial run of the model and that it not change thereafter. That is, do not change it when restarting the model.

3.7. diag_table

The diagnostic output is controlled via the diagnostics table, which is named “diag_table”.

Documentation on the use of diag_table comes with the release package. After extraction, it can be found here: src/shared/diag_manager/diag_table.html
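
As an illustrative sketch of the format (the file and field entries here are hypothetical; consult the released diag_table for the actual entries): the first line is a title, the second is the model initial time discussed in section 3.6, and these are followed by output-file lines (file name, output frequency, frequency units, file format, time-axis units, time-axis name) and field lines (module, field name, output name, file name, time sampling, time averaging, region, packing):

 doubly_periodic_hiram
 1982 1 1 0 0 0
 "atmos_daily",  24, "hours", 1, "days", "time",
 "dynamics", "temp", "temp", "atmos_daily", "all", .true., "none", 2,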

3.8. data_table

The data table includes information about external files that will be read by the data_override code to fill fields of specified data.

Documentation on the use of data_table comes with the release package. After extraction, it can be found here: src/shared/data_override/data_override.html
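
As a rough sketch of the line format (all names here are hypothetical, and the exact form of the interpolation entry differs between FMS releases, so treat the released data_table and its documentation as authoritative), each line gives the component grid, the field name used by the model, the field name in the file, the file path, an interpolation specification, and a scale factor:

 "ATM", "model_field_name", "file_field_name", "INPUT/some_forcing.nc", "bilinear", 1.0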

3.9. field_table

Aside from the model's required prognostic variables (velocity, pressure, temperature and humidity), the model may carry any number of additional prognostic variables. All of them are advected by the dynamical code; sources and sinks are handled by the physics code. These optional fields, referred to as tracers, are specified in field_table. For each tracer, the methods of advection, convection, source and sink to be applied are specified in the table. In essence, field_table is a powerful type of namelist.

A more thorough description of field_table comes with the release package. After extraction, it can be found here: src/shared/field_manager/field_manager.html
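
As a sketch, a field_table entry for the specific-humidity tracer typically looks like the following; the methods listed for each tracer vary, so the released field_table is authoritative:

 "TRACER", "atmos_mod", "sphum"
           "longname", "specific humidity"
           "units",    "kg/kg" /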

3.10. Changing the Sample Runscripts

3.10.1. Changing the length of the run and atmospheric time step

By default the scripts are set up to run for only one day. The run length is controlled by the namelist coupler_nml; the variables months and days set the run length, and the atmospheric time step is set by dt_atmos, in seconds.
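
For example, to run for one model day the settings would look like this (the dt_atmos value shown is illustrative):

 &coupler_nml
     months   = 0
     days     = 1
     dt_atmos = 1200     ! atmospheric time step in seconds
 /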

3.10.2. Changing the number of processors

By default the scripts are set up to run with the MPI library, on 288 processors. To change the number of processors, change the $npes variable at the top of the sample runscript. The processor count must be consistent with the model layouts.

To run on one processor without the MPI library, do the following:

  • Set the variable $npes to 1 at the top of the sample runscript.
  • Change the run command in the runscript from “mpirun -np $npes $model_executable:t” to simply “./$model_executable:t”
  • Remove the -Duse_libMPI from the mkmf line in the compile script.
  • Remove the -lmpi from the $LIBS variable in your mkmf template.
  • Move or remove your previous compilation directory (specified as $execdir in the runscript) so that all code must be recompiled.

4. Examine the Output

You may download sample output data for comparison here.

Note that the sample output is 6.6 GB in size. Extract the files into a location where you have sufficient free space. This output was generated on the gaea system at Oak Ridge National Lab (ORNL). The file output.tar.gz contains three directories: ascii, history and restart. The ascii directory contains text output of the model, including stdout and log messages. The history directory contains netCDF diagnostic output, governed by your entries in the diag_table. History and ascii files are labeled with the timestamp corresponding to the model time at the beginning of execution. The restart directory contains files that describe the state of the model at termination. To restart the model from this state, move these restart files to the INPUT directory to serve as the initial conditions for your next run.