Quickstart Guide: FMS high resolution atmospheric model (HIRAM)
The High Resolution Atmospheric Model uses the finite-volume dynamical core on
a cubed sphere grid with each face consisting of a 180 by 180 grid. There are 32
levels in the vertical. The model’s physics parameterizations are similar to GFDL’s
AM2 model, with modifications deemed appropriate for the increased resolution.
Zhao, Ming, Isaac M. Held, Shian-Jiann Lin, and Gabriel A. Vecchi, December 2009:
Simulations of global hurricane climatology, interannual variability, and response to global
warming using a 50km resolution GCM. Journal of Climate, 22(24), 6653-6678.
Table of Contents
- 1. Acquire the Source Code and Runscripts
- 2. Acquire the Input Datasets
- 3. Run the Model
- 3.1. Functionality of the Sample Runscripts
- 3.2. Portability Issues with the Sample Runscripts
- 3.3. layout and io_layout
- 3.4. Restarting and cold-starting
- 3.5. Time and calendar
- 3.6. diag_table
- 3.7. data_table
- 3.8. field_table
- 3.9. Changing the Sample Runscripts
- 4. Examine the Output
1. Acquire the Source Code and Runscripts

You may download the release package.
2. Acquire the Input Datasets

You may download the input data. This file is large, approximately 19 GB after unzipping,
so extract the files into a location where you have sufficient free space.
3. Run the Model

3.1. Functionality of the Sample Runscripts

This release includes a compile script and a run script for the HIRAM model in
the exp directory. The compile script:
- generates the mppnccombine executable, which combines individual atmospheric
restart files from multiprocessor output into one netCDF file.
- generates the landnccombine executable, which combines individual land restart
files from multiprocessor output into one netCDF file.
- compiles and links the model source code.
The run script:
- creates a working directory where the model will be run.
- creates or copies the required input data into the working directory.
- runs the model.
- combines distributed output and renames the output files using the timestamp.
Note that the directory paths and file paths are variables. They are initially
set to correspond to the directory structure as it exists after extraction from
the tar file, but are made variables to accommodate changes to this directory structure.
The directory path most likely to need changing is workdir. workdir is a temporary
directory where the model will run. A large amount of data will be copied into the
work directory, and output from the model is also written to the work directory.
workdir must be large enough to accommodate all of this. The input data is approximately
20GB and model output is potentially even larger.
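For example, the workdir variable near the top of the sample runscript can be pointed
at your own filesystem along these lines (csh syntax, as used by the sample runscripts;
the path shown is only a placeholder):

    # temporary run directory: needs roughly 20 GB or more of free space
    set workdir = /scratch/$USER/hiram_workdir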
3.2. Portability Issues with the Sample Runscripts

If you encounter a compile error when executing the compile script, first check
whether you have correctly customized your mkmf
template. The scripts use the mkmf utility,
which creates a Makefile to facilitate compilation. The
mkmf utility uses a platform-specific template for
setting up system and platform dependent parameters. Sample templates for various
platforms are provided in the bin directory. You may need
to consult your system administrator to set up a compilation template for your platform
and ensure the locations for system libraries are defined correctly. For a complete
description of mkmf see the mkmf documentation.
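As a rough illustration, an mkmf template is a short file of make macro definitions.
The sketch below shows the kind of macros involved, assuming a generic MPI Fortran
compiler and netCDF installation; the actual macro values for your platform should be
taken from (or modeled on) the sample templates in the bin directory.

    # hypothetical mkmf template for an imaginary platform
    FC = mpif90                       # Fortran compiler
    CC = mpicc                        # C compiler
    LD = mpif90                       # linker
    CPPFLAGS = -I/usr/local/netcdf/include
    FFLAGS = -O2
    LDFLAGS = -L/usr/local/netcdf/lib $(LIBS)
    LIBS = -lnetcdff -lnetcdf -lmpi   # -lmpi is removed for a non-MPI build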
3.3. layout and io_layout

The horizontal grid of each component model is partitioned among processors according
to the setting of the namelist variable “layout”. Each model has a layout variable
in its namelist.
Consider a horizontal grid with 30 cells in one direction and 16 in the other.
(This does not have to be a rectangular longitude-by-latitude grid; model grids need
only be logically rectangular.) If run on 24 processors, one could set layout = 6,4.
The grid would then be partitioned among processors as shown below. (Each asterisk
represents a grid cell.)
+---------+---------+---------+---------+---------+---------+
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|         |         |         |         |         |         |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|  pe=18  |  pe=19  |  pe=20  |  pe=21  |  pe=22  |  pe=23  |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|         |         |         |         |         |         |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
+---------+---------+---------+---------+---------+---------+
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|         |         |         |         |         |         |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|  pe=12  |  pe=13  |  pe=14  |  pe=15  |  pe=16  |  pe=17  |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|         |         |         |         |         |         |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
+---------+---------+---------+---------+---------+---------+
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|         |         |         |         |         |         |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|  pe=6   |  pe=7   |  pe=8   |  pe=9   |  pe=10  |  pe=11  |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|         |         |         |         |         |         |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
+---------+---------+---------+---------+---------+---------+
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|         |         |         |         |         |         |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|  pe=0   |  pe=1   |  pe=2   |  pe=3   |  pe=4   |  pe=5   |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
|         |         |         |         |         |         |
|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|* * * * *|
+---------+---------+---------+---------+---------+---------+
The cubed sphere atmosphere and land models treat each face of the cubed sphere
grid as a separate grid in itself. The example above would then represent one face
of the global grid. The total number of cells globally would be 30*16*6 = 2880, and
the total number of processors used would be 6*24 = 144.
The cubed sphere atmosphere has one restriction: it requires at least 4 grid
cells in each direction on each processor, i.e. a minimum of 16 grid cells per processor.
Choice of layout has no effect on the model’s solution, but on some platforms
the code’s performance can be affected.
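For the atmosphere in this release the layout is typically set in the fv_core_nml
namelist group, and the land model has a corresponding layout variable in its own
namelist group. A sketch of the setting for the 6x4 example above (other variables in
the group omitted; check the namelists shipped with the release for the exact group names):

    &fv_core_nml
        layout = 6,4    ! 6 x 4 = 24 processors per cubed-sphere face
    /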
The I/O efficiency of high resolution models run on a large number of processors
can be significantly impacted by the number files written. For this reason, this
code allows control over the distributed output of the diagnostic and restart files.
io_layout is a namelist variable that controls the partitioning of multiple processor
output among files. Each component model has its own io_layout.
Consider a model running on 24 processors with layout = 6,4 as shown above. If
io_layout were set to 3,2 the processor output would be aggregated into files
as shown below.
+-------------------+-------------------+-------------------+
|                   |                   |                   |
|   pe=18   pe=19   |   pe=20   pe=21   |   pe=22   pe=23   |
|                   |                   |                   |
|      file 3       |      file 4       |      file 5       |
|                   |                   |                   |
|   pe=12   pe=13   |   pe=14   pe=15   |   pe=16   pe=17   |
|                   |                   |                   |
+-------------------+-------------------+-------------------+
|                   |                   |                   |
|   pe=6    pe=7    |   pe=8    pe=9    |   pe=10   pe=11   |
|                   |                   |                   |
|      file 0       |      file 1       |      file 2       |
|                   |                   |                   |
|   pe=0    pe=1    |   pe=2    pe=3    |   pe=4    pe=5    |
|                   |                   |                   |
+-------------------+-------------------+-------------------+
Output from a single processor cannot be divided between files. This means that
io_layout must be chosen such that layout(1) and layout(2) are multiples of io_layout(1)
and io_layout(2), respectively.
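Continuing the example, layout and io_layout might then appear together as in the
sketch below (again assuming fv_core_nml for the atmosphere); note that 6 is a multiple
of 3 and 4 is a multiple of 2:

    &fv_core_nml
        layout    = 6,4   ! 24 processors per face
        io_layout = 3,2   ! output aggregated into 3 x 2 = 6 files per face
    /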
The output file names consist of a base name with a four-digit file number appended
to it. For the example above, the ice model would write six restart files, with names
ending in .0000 through .0005.
As stated above, the cubed sphere atmosphere and land models treat each face
of the global cubed sphere grid as a separate grid. From these models, one would
see output file names with both the tile number and the file number as part of the
name, for example names ending in .tile1.nc.0000, .tile1.nc.0001, and so on for each
of the six tiles.
Tools are provided to combine these distributed files into files of data on a
single grid. The processor numbers are removed from the file names after combining.
Combining of restart files is done in the run script provided, but is not required.
The code has the capability of restarting with either combined or distributed restart
files.
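For reference, the combiner tools are ordinary command-line programs run after the
model finishes. A typical mppnccombine invocation looks roughly like the following,
with the combined output file named first and the per-processor pieces after it (the
file names here are illustrative only):

    mppnccombine fv_core.res.tile1.nc fv_core.res.tile1.nc.0000 \
                 fv_core.res.tile1.nc.0001 fv_core.res.tile1.nc.0002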
If io_layout is set to 1,1 then all processors write to a single file and the
file number suffix does not appear in the file name.
3.4. Restarting and cold-starting

Restart files are written to a sub-directory, named RESTART, off the working
directory. Information about the state of the model at the point of termination
is contained in these files. Each component model and/or sub-component may have
restart files. To continue a previous integration these files are put in the INPUT
directory. They are read at initialization to restore the state of the model as
it was at termination of the previous integration.
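In practice this amounts to copying the restart files saved from a previous run into
the INPUT directory before the next run is launched, for example (a sketch; the source
path is a placeholder and $workdir is the runscript variable described above):

    # stage the previous run's restart files as initial conditions
    cd $workdir
    cp /path/to/previous/RESTART/* INPUT/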
If a component and/or sub-component does not find its restart files in the INPUT
directory then it performs a default initialization, also referred to as a cold-start.
The default initialization of each component is required to be compatible with other
model components, but otherwise is entirely at the discretion of the developer(s).
The atmospheric and land models typically fill the model fields with constant
values for a cold-start. The result is a model state that is very flat and far away
from anything scientifically interesting. As a result, a cold-started model needs
to be spun-up. The spin-up time can be very long. For the land model it can be on
the order of a century or more. For this reason it is recommended that the user
cold-start only the atmospheric component, by omitting only atmospheric restart
files and including all other restart files.
A few changes to the namelist settings are needed for a cold start. For the intrepid,
a namelist file with these changes is included.
3.5. Time and calendar

Control of model time and calendar is a common source of confusion. Only a couple
of facts need to be understood to avoid most of this confusion. The first is how the
model time and calendar are set.
- When coupler.res does not exist: current_date and calendar are as specified in
coupler_nml, and the namelist setting of force_date_from_namelist is ignored.
- When coupler.res does exist and force_date_from_namelist=.true.: current_date and
calendar are as specified in coupler_nml.
- When coupler.res does exist and force_date_from_namelist=.false.: current_date and
calendar are read from coupler.res, and the namelist settings of current_date and
calendar are ignored.
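The relevant coupler_nml variables look roughly like this (the values are placeholders;
current_date is given as year, month, day, hour, minute, second):

    &coupler_nml
        current_date = 1979,1,1,0,0,0        ! yr, mo, day, hr, min, sec
        calendar = 'julian'                  ! calendar type
        force_date_from_namelist = .false.   ! if .true., always use current_date above
    /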
The second is the date which appears at the top of the diag_table. This is the
model initial time. It is used for two purposes.
- It is used to define the time axis for the netCDF model output; time values are
written as time since this initial time.
- It is also used in the time interpolation of certain input data. Because of this,
it is recommended that it always be equal to the date that was used for current_date
(in coupler_nml) in the initial run of the model and that it not change thereafter.
That is, do not change it when restarting the model.
3.6. diag_table

The diagnostic output is controlled via the diagnostics table, which is named diag_table.
Documentation on the use of diag_table comes with the release package. After
extraction, it can be found here: src/shared/diag_manager/diag_table.html
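As a rough sketch, the top of a diag_table contains a descriptive title line, the base
(initial) date discussed in the previous section, and then file and field entries. The
entries below are hypothetical (the file name atmos_daily, the module name dynamics
and the field temp are illustrative only); the diag_table shipped with the release and
the documentation above are authoritative.

    HIRAM quickstart example
    1979 1 1 0 0 0
    "atmos_daily", 24, "hours", 1, "days", "time"
    "dynamics", "temp", "temp", "atmos_daily", "all", .true., "none", 2

The file line gives the output file name, the output frequency and its units, the file
format, and the time axis units and name; the field line gives the module and field
names, the output name, the target file, the time sampling, whether to time-average,
an optional regional section, and the packing.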
3.7. data_table

The data table includes information about external files that will be read by
the data_override code to fill fields of specified data.
Documentation on the use of data_table comes with the release package. After
extraction, it can be found here: src/shared/data_override/data_override.html
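A data_table entry is a single line per overridden field. A hypothetical entry might
look like the line below, naming the component grid, the field as known to the model,
the field name in the file, the file itself, whether the data is already on the model
grid, and a scaling factor; the exact column meanings differ between FMS versions, so
treat this only as an illustration and consult the documentation above.

    "ATM", "sst_obs", "SST", "INPUT/sst_data.nc", .false., 1.0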
3.8. field_table

Aside from the model's required prognostic variables (velocity, pressure, temperature
and humidity), the model may carry any number of additional prognostic variables. All
of them are advected by the dynamical code; their sources and sinks are handled by the
physics code. These optional fields, referred to as tracers, are specified in field_table.
For each tracer, the methods of advection, convection, and the sources and sinks to be
applied to the tracer are specified in the table. In essence the field_table is a
powerful type of namelist.
A more thorough description of field_table comes with the release package. After
extraction, it can be found here: src/shared/field_manager/field_manager.html
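An entry in field_table has the general shape sketched below: a header line identifying
the entry type, the component model, and the tracer name, followed by attribute/value
pairs and a terminating slash. The tracer and attributes shown are illustrative; the
field_table included with the release lists the tracers HIRAM actually uses.

    "TRACER", "atmos_mod", "liq_wat"
              "longname", "cloud liquid water specific humidity"
              "units",    "kg/kg" /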
3.9. Changing the Sample Runscripts

By default the scripts are set up to run only one day. The run length is controlled
by the namelist coupler_nml. The variables months and days set the run length.
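For example, to run for one month instead of one day, the relevant coupler_nml lines
would be changed roughly as follows (other coupler_nml variables omitted):

    &coupler_nml
        months = 1,
        days   = 0,
    /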
By default the scripts are set up to run with the MPI library, on 1080 processors.
To change the number of processors, change the $npes variable at the top of the
sample runscript. The processor count must be consistent with the model layouts.
To run on one processor without the MPI library, do the following:
- Set the variable $npes to 1 at the top of the sample runscript.
- Change the run command in the runscript from "mpirun -np $npes $model_executable:t"
to simply "./$model_executable:t".
- Remove the -Duse_libMPI from the mkmf line in the compile script.
- Remove the -lmpi from the $LIBS variable in your mkmf template.
- Move or remove your previous compilation directory (specified as $execdir
in the runscript) so that all code must be recompiled.
4. Examine the Output

You may download sample output data for comparison.
Note that the sample output is 6.6 GB in size. Extract the files into a location
where you have sufficient free space. This output was generated on the gaea system
at Oak Ridge National Lab (ORNL). The file output.tar.gz contains three directories:
ascii, history and restart. The ascii directory contains text output of the model
including stdout and log messages. The history directory contains netCDF diagnostic
output, governed by your entries in the diag_table. History and ascii files are
labeled with the timestamp corresponding to the model time at the beginning of execution.
The restart directory contains files which describe the state of the model at termination.
To restart the model from this state, move these restart files to the
INPUT directory to serve as the initial conditions for your next run.