


Overview

Introduction

In climate research, with the increased emphasis on detailed representation of the individual physical processes governing the climate, the construction of a model has come to require large teams working in concert, with individual sub-groups each specializing in a different component of the climate system, such as the ocean circulation, the biosphere, land hydrology, radiative transfer and chemistry, and so on. The development of model code now requires teams to be able to contribute components to an overall coupled system, with no single core group of researchers mastering the whole. This may be called the distributed development model, in contrast with the monolithic small-team model of earlier decades.

A simultaneous trend is the increase in hardware and software complexity in high-performance computing, as we shift toward the use of scalable computing architectures. Scalable architectures come in several varieties, including shared-memory parallel vector systems, distributed memory massively-parallel systems, and distributed shared-memory NUMA systems. The individual computing elements themselves can embody complex memory hierarchies. To facilitate sharing of code and development costs across multiple institutions, it is necessary to abstract away the details of the underlying architecture and provide a uniform programming model across different scalable and uniprocessor architectures.

These developments entail a change in the programming paradigm used in the construction of complex earth systems models. The approach is to build code out of independent modular components, so that a model can be assembled in a configuration of components suited to the scientific task at hand, or easily extended to such a configuration. The code must thus embody the principles of modularity, flexibility and extensibility.

The current trend in model development is along these lines, with systematic efforts under way in Europe and the U.S. to develop shared infrastructure for earth systems models. The models developed on this shared infrastructure are envisaged to meet a variety of needs: the same model code will run at different levels of complexity on different computer architectures, using one set of components on a university researcher's desktop, or, with a different choice of subsystems, performing comprehensive assessments of climate evolution at large supercomputing sites with the best assembly of climate component models available at the time.

The shared infrastructure currently in development concentrates on the underlying ``plumbing'' for coupled earth systems models, building the layers necessary for efficient parallel computation and data transfer between model components on independent grids.


The GFDL Flexible Modeling System

The Geophysical Fluid Dynamics Laboratory (NOAA/GFDL) undertook a technology modernization program beginning in the late 1990s. The principal aim was to prepare an orderly transition from vector to parallel computing. Simultaneously, the opportunity presented itself for a software modernization effort, the result of which is the GFDL Flexible Modeling System (FMS). FMS is an attempt to address the need to develop high-performance kernels for the numerical algorithms underlying non-linear flow and physical processes in complex fluids, while maintaining the high-level structure needed to harness component models of climate subsystems developed by independent groups of researchers. It constitutes a specification of standards, and a shared software infrastructure implementing those standards, for the construction of climate models and model components for vector and parallel computers. It forms the basis of current and future coupled modeling at GFDL. In 2000, it was benchmarked on a wide variety of high-end computing systems, and runs in production on three very different architectures: parallel vector (PVP), distributed massively-parallel (MPP) and distributed shared-memory (DSM)1.1, as well as on scalar microprocessors. Models in production within FMS include a hydrostatic spectral atmosphere, a hydrostatic grid-point atmosphere, an ocean model (MOM), and land and sea ice models. In development, or scheduled for inclusion, are a non-hydrostatic atmospheric model, an isopycnal coordinate ocean model, and an ocean data assimilation system.

The shared software for FMS includes at the lowest level a parallel framework for handling distribution of work among multiple processors, described in MPP. Upon this are built the exchange grid software layer for conservative data exchange between independent model grids, and a layer for parallel I/O. Further layers of software include a diagnostics manager for creating runtime diagnostic datasets in a variety of file formats, a time manager, general utilities for file-handling and error-handling, and a uniform interface to scientific software libraries providing methods such as FFTs.

Interchangeable components are designed to present a uniform interface, so that, for instance, behind an ``ocean model'' interface in FMS may lie a full-fledged ocean model, a few lines of code representing a mixed layer, or merely a routine that reads in an appropriate dataset, without requiring other component models to be aware of which of these has been chosen in a particular model configuration. Coupled climate models in FMS are built as a single executable calling subroutines for component models for the atmosphere, ocean and so on. Component models may be on independent logically rectangular (though possibly physically curvilinear) grids, linked by the exchange grid, and making maximal use of the shared software layers.
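To illustrate this single-executable structure, the following is a minimal, self-contained Fortran 90 sketch. The module and routine names (atmos_model_mod, ocean_model_mod, update_atmos, update_ocean) are illustrative placeholders and not the actual FMS interfaces, and the exchange grid and parallel layers are reduced to comments.

  ! Sketch only: placeholder names, not actual FMS code.
  module ocean_model_mod
    implicit none
  contains
    subroutine ocean_init()
      print *, 'ocean: init'
    end subroutine ocean_init
    subroutine update_ocean(dt)
      real, intent(in) :: dt
      print *, 'ocean: advance by', dt, 'seconds'
    end subroutine update_ocean
  end module ocean_model_mod

  module atmos_model_mod
    implicit none
  contains
    subroutine atmos_init()
      print *, 'atmos: init'
    end subroutine atmos_init
    subroutine update_atmos(dt)
      real, intent(in) :: dt
      print *, 'atmos: advance by', dt, 'seconds'
    end subroutine update_atmos
  end module atmos_model_mod

  program coupled_driver
    use atmos_model_mod, only: atmos_init, update_atmos
    use ocean_model_mod, only: ocean_init, update_ocean
    implicit none
    integer :: n
    real, parameter :: dt = 3600.0   ! coupling interval, seconds

    call atmos_init()                ! each component initializes on its own grid
    call ocean_init()                ! behind this interface could lie a full ocean
                                     ! model, a mixed layer, or a data reader
    do n = 1, 24
       call update_atmos(dt)
       ! here the exchange grid would conservatively regrid surface fluxes
       call update_ocean(dt)
    end do
  end program coupled_driver

Because the ocean component is reached only through this uniform interface, any implementation satisfying it can be substituted without changes to the atmosphere or to the driver.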

This document provides a description of the overall design of FMS, with a specification of the coding constructs required of developers building elements of the FMS. We first lay out the overall structure of FMS, followed by a section on general coding standards, and finally standards specific to different FMS elements.


Purpose of the Manual

The documentation for FMS is divided into three categories: a developer's guide; a user's guide; and a technical description where appropriate. This document serves as the developer's guide. We visualize several categories of developer: consider

The user's guide and technical description are distributed as modular documentation along with the code. The user's guide describes the use and call syntax associated with an FMS module, and the technical description provides more detail on the algorithms and their implementation. As the examples listed above show, the category of user we call ``developer'' is broad - every user is a potential developer - and needs a broader document linking the whole system. While remaining closely linked to the user's guides and technical documents, this developer's guide provides an understanding of the design principles on which the module interfaces and data structures in FMS are constructed, and of how the system is intended to be used. The underpinning for this is provided by the standards describing the design specification for FMS. It is hoped that the manual will allow the user/developer to interact with FMS in a free and open manner, including designing and building extensions, porting to new platforms, and replacing any FMS component with new code that will be usable across the FMS community.



Footnotes

1.1 Also known as cache-coherent non-uniform memory access (ccNUMA) architecture.

Author: V. Balaji