next up previous contents
Next: The MPP modules Up: The FMS Manual: A Previous: Coding for performance   Contents


Interfaces for component models

Component models are defined as the model code for each climate subsystem, specifically: atmosphere, ocean, land surface, sea ice, etc. They are divided into a core which treats resolved-scale dynamics (referred to as the dynamical core) and a set of parameterized representations of unresolved phenomena (referred to as the physics routines).

This chapter specifies standards for component models. The standards are divided into four parts: general standards for representing physical information; standards for writing column physics (mainly relevant to atmospheric models); the interface specification for how component models call column physics; and finally the interfaces between specific component models and the coupled model driver.

General representation of physical information

Horizontal grid

The state of a component model is represented at any instant on a logically rectilinear grid. The horizontal coordinates are permitted to be physically curvilinear. For coupled model experiments, the coupler places restrictions on the span (footnote 4.1) of coupled model elements:

  1. The span of the atmos component is defined as the model's global domain.
  2. The span of the planetary surface (the union of land and ice (footnote 4.2) components) must equal the atmospheric span. In addition there must be no holes and no overlaps between the components of the planetary surface.
  3. The ocean component must exactly underlie the ice component.
  4. If any component's span is incomplete, that component must patch in data for the uncovered region. The coupler assumes that each horizontal grid it sees supplies all the data required for coupling.

There are no restrictions on the span of solo component models.

Vertical grid

The vertical grid of a component model is generally internal to that model, and is specified separately in the description of each component model core.

Boundary state vector

Each component model has a boundary state vector that contains all information about its instantaneous internal state that may be required by the coupler. The requirements on the boundary state vector that each component model must provide are specified in FMS:Coupler.


Timestepping

  1. Explicit or implicit timestepping may be used by any component model. This places certain restrictions on the treatment of fluxes between component models, described below in FMS:Coupler.
  2. The coupling timestep must be an integral multiple of each of the component model timesteps.
  3. The order of calls to component models in the main program may depend on which of the component models has the shortest timestep.
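The integral-multiple requirement in item 2 can be checked with a simple modulo test. The sketch below is illustrative only: the variable names and timestep values are invented for this example and are not FMS interfaces.

```fortran
! Hypothetical sketch: verify that the coupling timestep is an
! integral multiple of each component model timestep.
program check_timesteps
  implicit none
  integer, parameter :: dt_cpl = 3600   ! coupling timestep (s)
  integer, parameter :: dt_atm = 1200   ! atmosphere timestep (s)
  integer, parameter :: dt_ocn = 3600   ! ocean timestep (s)

  if (mod(dt_cpl, dt_atm) /= 0 .or. mod(dt_cpl, dt_ocn) /= 0) then
     print *, 'error: coupling timestep is not an integral multiple'
     stop 1
  end if
  print *, 'atmosphere steps per coupling step:', dt_cpl / dt_atm
end program check_timesteps
```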


Units

FMS uses SI units.

Column Physics

Historically, the first attempt to write down a formal FMS design specification began with the 1D column physics specification. While that document continues to be separately available, many of its recommendations have now become part of the general programming standards for FMS (FMS:ProgrammingStandards). Considerations specific to the column physics routines are outlined here.

Definition of column physics

Unresolved-scale and certain other phenomena are represented in models through column physics routines, which operate entirely on vertical columns of data. This column orientation is distinctive in that the domain decomposition of the model on scalable architectures is horizontal (FMS:Parallelism).

Horizontal grid

Column physics routines in FMS are constructed to operate with no knowledge of the parallel decomposition of the model. The main computational routine must be thread-safe (ThreadSafety).

  1. All model fields modified by a column physics routine share the same 2D horizontal grid.
  2. The standard latitude (-$\pi$/2, $\pi$/2) and standard longitude (0, 2$\pi$) locations of each column are passed in arguments that are arrays on this grid.
  3. There are no horizontal data dependencies (in the limit, the routine must be able to operate on a single vertical column).
  4. The horizontal grid indices (i,j) must be the two innermost indices on any model field.
  5. If lateral boundary conditions are required by the column computations, they, and the locations of the grid faces, are passed in arrays with one extra (i,j) grid point in each horizontal direction. There are no current cases requiring lateral boundary conditions.

Vertical grid

  1. The vertical grid index k must be the third index of any model field of rank 3 or higher. Other indices (e.g., tracer number, timestep) must follow k.
  2. Vertical positions are numbered from top to bottom, with points nearest the ground having the highest value of k.
  3. Vertical grid locations are passed as arrays on the column physics grid. Vertical grid faces, if required, are passed as arrays with one extra k location. An optional mask array can be passed to identify non-existent grid locations (e.g., locations below the topography in a non-terrain-following atmospheric vertical coordinate).
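The index-ordering rules above can be summarized in a declaration sketch. Every name and extent here is invented for illustration; none of them is an actual FMS interface.

```fortran
! Sketch of the index-ordering conventions: (i,j) are the two
! innermost indices, k is third, and further indices (here, tracer
! number) follow k. All names are hypothetical.
program index_order_sketch
  implicit none
  integer, parameter :: is=1, ie=4, js=1, je=3   ! horizontal extents
  integer, parameter :: nlev=10, ntrac=2         ! levels, tracers

  real    :: tracers(is:ie, js:je, nlev, ntrac)  ! rank-4 model field
  real    :: p_half (is:ie, js:je, nlev+1)       ! faces: one extra k
  logical :: mask   (is:ie, js:je, nlev)         ! optional mask array

  tracers = 0.0
  p_half  = 0.0
  mask    = .true.   ! .false. would mark points below the topography

  ! k runs from the top (k=1) down; the point nearest the ground
  ! has the highest value of k:
  print *, 'bottom level index:', size(tracers, 3)
end program index_order_sketch
```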

Shared physical quantities

Some physical quantities (e.g., moisture) are shared between multiple parameterizations.

  1. Shared physical quantities are passed in and out as optional arguments to avoid redundant computations.
  2. All quantities are in SI units, as everywhere in FMS. Dimensionless tracer concentrations are, by convention, expressed in ``specific humidity''-like form (mass of tracer per unit mass of moist air) rather than ``mixing ratio''-like form (mass per unit mass of dry air).
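The optional-argument convention for shared quantities might look like the following sketch. The routine name, arguments, and the trivial placeholder calculation are all hypothetical, not FMS code.

```fortran
! Sketch of the optional-argument convention for shared quantities:
! if the caller has already computed the shared field it passes it in;
! otherwise the routine computes its own copy.
module shared_quantity_sketch
  implicit none
contains
  subroutine moist_process(t, q, qsat)
    real, intent(in)           :: t(:,:,:), q(:,:,:)
    real, intent(in), optional :: qsat(:,:,:)   ! shared quantity
    real, allocatable :: qs(:,:,:)
    if (present(qsat)) then
       qs = qsat        ! reuse the caller's value: no recomputation
    else
       qs = t * 0.0     ! placeholder for a real saturation calculation
    end if
    ! ... computation using q and qs would follow ...
  end subroutine moist_process
end module shared_quantity_sketch
```

A caller that has already computed the shared quantity passes it explicitly, e.g. `call moist_process(t, q, qsat=qs_precomputed)`; one that has not simply omits the argument.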

Procedural interfaces

There are three basic types of public procedural interface: the constructor, the destructor, and the main computational routine.

  1. The constructor and destructor perform all the functions outlined earlier in FMS:Init and FMS:Exit, principally I/O and memory management. They are not required to be thread-safe. They will never be called within a code region that might be partitioned across shared-memory execution threads.
  2. The main computational routine performs the basic calculation associated with the physics parameterization. The result is returned as either a time tendency or an adjustment.

    This routine is required to be thread-safe, as it might potentially be called in a code region partitioned across shared-memory execution threads. Partitioning is only in the horizontal. This places some restrictions on the actions such a routine might perform:

    1. No I/O may be performed, barring simple notes or error messages. The main I/O activity must be performed by the constructor and destructor. If diagnostic I/O must be performed during the run, it must be done from a separate diagnostic I/O routine called from outside the threaded region.
    2. Shared memory locations may not be written to by this routine. This restriction applies to scalar variables; model fields, however, may be updated, since their horizontal indices are distributed among the shared-memory threads and are guaranteed to have no dependencies.
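The constructor/destructor/computational-routine pattern described above can be sketched as a module skeleton. The `_init`/`_end` naming follows general FMS convention; the module name and the placeholder calculation are invented for this example.

```fortran
! Schematic module layout: constructor (I/O and memory, not
! thread-safe), thread-safe computational routine returning a time
! tendency, and destructor. Bodies are placeholders only.
module example_physics_mod
  implicit none
  private
  public :: example_init, example_step, example_end
  real, allocatable :: coeff(:)   ! state allocated by the constructor
contains
  subroutine example_init(nlev)   ! constructor: called outside threads
    integer, intent(in) :: nlev
    allocate(coeff(nlev))
    coeff = 1.0                   ! would normally be read from a file
  end subroutine example_init

  subroutine example_step(t, dtdt)  ! thread-safe: no I/O, no writes
    real, intent(in)  :: t(:,:,:)   ! to shared scalars; coeff is only
    real, intent(out) :: dtdt(:,:,:)! read, and dtdt is a model field
    integer :: k                    ! partitioned in the horizontal
    do k = 1, size(t, 3)
       dtdt(:,:,k) = -coeff(k) * t(:,:,k)   ! placeholder calculation
    end do
  end subroutine example_step

  subroutine example_end()        ! destructor: release memory
    deallocate(coeff)
  end subroutine example_end
end module example_physics_mod
```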



Footnote 4.1: The span is defined as the physical area of the planetary surface covered by a component.
Footnote 4.2: The component model covering the ocean surface is conventionally referred to as the ice component.

Author: V. Balaji