GFDL - Geophysical Fluid Dynamics Laboratory

FRE User Documentation


What is FRE?

The FMS Runtime Environment (FRE) is a toolset for managing experiments from start to finish. This includes tasks such as acquiring source code, compiling code, launching jobs to run models, and post-processing the output. FRE consists of:

  • Experiment Suite XML File: containing information about how to acquire and compile the model sources, how to run the compiled model, and how to post-process its output data. Users need to create this file to describe their models (many examples have been provided).
  • Templates: for compiling and running models (FRE-generated scripts do not require modification).
  • Conventions: such as directory structure and naming.
  • Command Line Utilities: that read the experiment suite description file and perform a specific set of functions:
    • fremake – checks out the model’s code, creates/submits compile scripts
    • frerun – creates and submits runscripts
    • frepp – creates and submits postprocessing scripts
    • frecheck – compares regression test runs
    • frelist – lists existing experiments and details in your XML file
    • frestatus – shows status of batch compiles and runs
    • frepriority – provides job and queue control for long-running jobs
    • frescrub – deletes duplicated postprocessing files
    • freconvert – converts an XML file to a newer version
    • freppcheck – reports missing postprocessing files

A diagram showing the relationship between frerun-created and frepp-created scripts is available on a separate page.


Loading FRE

FRE relies on the Environment Modules system for dynamic modification of a user’s environment. A module modifies your current shell environment by setting environment variables, allowing software to be dynamically loaded and unloaded. To use FRE via the Environment Modules system, you must first tell the modules system where it can find your module files, then load the modules you wish to use. Modules can be located in any directory, but at GFDL and Gaea, the location is the same.

As an example, to load FRE at GFDL or Gaea:

module use -a ~fms/local/modulefiles
module load fre

You can use the following module commands to see which modules are available and which are currently loaded, respectively:

module avail
module list

The “module use” and “module load” commands must be executed in the shell session from which you will call FRE. These commands should also be in any <csh> tags in your XML.
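As a sketch, the same commands could appear inside a platform’s <csh> tag like this (the CDATA wrapper follows the convention noted in the freindent section below; the platform name is the GFDL example used elsewhere in this document):

<platform name="gfdl.default">
   <csh><![CDATA[
      module use -a ~fms/local/modulefiles
      module load fre
   ]]></csh>
</platform>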

To acquire a set of sample XML files, first set your CVSROOT variable:

setenv CVSROOT /home/fms/cvs 

Then use the following command to acquire the XMLs:

cvs co -r siena_201207 xml

Selecting a Specific Version of FRE

Performing the following command will always load the default version of FRE:

module load fre

To select a specific version (such as bronx-2):

module load fre/bronx-2

The command:

module avail

will tell you which version of a module is currently the default.

Selecting Specific FRE Subsystems

The ‘fre’ module is a meta module that loads a variety of utilities. If you do not want the entire FRE suite, or want to select a particular version of a tool, you can do that by ‘unloading’ the default version and ‘loading’ another.

For example, if FRE loads version 4.0 of netcdf by default, but you want version 4.1:

module unload netcdf/4.0
module load netcdf/4.1

The following are a few of the major submodules that FRE provides:

  • fre-system: used for configuring site specific information
  • fre-commands: includes tools for aiding FRE (data transfer, file cleanup, scheduling, etc)
  • fre-nctools: includes tools for manipulating netcdf files
  • fre-curator: includes tools for standardizing and publishing data

Compiling an Experiment

The frelist tool can be used to list the names of experiments in your XML file:

frelist -x xml_file

To compile an experiment, use fremake:

fremake -x xml_file experiment_name

At the end of the generated output will be a command to submit the script to the batch scheduler. It will resemble:

TO SUBMIT => msub /home/$USER/.../experiment_name/.../exec/compile_experiment_name.csh

If you prefer, you can simply run this script in your interactive session.

To compile in batch mode, copy the submission command and run it. A job number will appear, and the experiment will begin compiling. If MOAB is your job scheduler, you can check the status of the batch job with:

showq -n -v -u $USER

You may also use frestatus to print the status of batch compiles once they have started running. frestatus parses the batch stdout file, so it does not print information about jobs in the batch queue which have not yet started to run.

frestatus -x xml_file experiment_name

Running an Experiment

After your compilation has finished successfully, the next step is to submit the run scripts using frerun. Two types of runs are available for most experiments, depending on the xml tags within the experiment’s <runtime> tag: regression runs and production runs. Regression runs are short runs typically executed for testing purposes. Production runs are full-length model runs that have the ability to continue over multiple batch jobs.

Execute the following for a regression run. The string “basic” indicates that a model run will be created for each of the <run> tags inside the <regression> tag with the attribute name="basic".

frerun -r basic -x xml_file experiment_name

Execute the following for a production run:

frerun -x xml_file experiment_name

As with fremake, a submit command will appear at the end of the frerun command’s output.

TO SUBMIT => msub /home/$USER/.../experiment_name/.../scripts/run/experiment_name

As with fremake, you can run interactively by executing the script directly on a machine or partition intended for running jobs; you would need an interactive session on the compute partition with access to enough processors to run the job. Otherwise, to run in batch mode, copy the job submission command and execute it. A job number will appear, and the experiment will begin running. Again, you may check the status of this job with showq or frecheck.

Note that frecheck only reports the results of regression runs, not production runs. You can check the status of a run by looking at the batch script’s stdout. To locate the batch stdout file, execute:

frelist -x xml_file -d stdout experiment_name

FRE Tools

frelist : List Information in an XML File

frelist lists experiments and other information from the specified XML file.

To view available options:

frelist -h

Execute frelist:

frelist [option...] -x xml_file [experiment_name...]

fremake : Compile an Executable

fremake checks for the existence of the source code directory for the experiment to be compiled, and if it is not found, executes commands to acquire the source code from revision control systems as defined in the <source> tags in the XML file. It then creates a compile script based on the target specified on the command line and information in the <compile> tags in the XML file.

To view available fremake options run:

fremake -h

Execute fremake:

fremake [-options:] xml_file experiment_name

frerun : Run a Model

frerun creates and submits runscripts based on a shell script template and information from an XML file. To perform regression test runs, use the ‘-r [regression label]’ argument; otherwise, frerun will create a runscript for a production run based on information in the <production> tag for the experiment. If “-r suite” is specified, then scripts for all regression runs labelled basic, scaling, and restarts will be created. Regression tests are shorter than production runs and have different runtime specifications according to the XML tags.

To view available run options:

frerun -h

Execute frerun for production run:

frerun [-options:] xml_file experiment_name

Execute frerun for a regression run:

frerun -r [basic, scaling, or restarts] xml_file experiment_name

To run a set of regression tests, replace [basic, scaling or restarts] with suite.

frepp : Post-process Model Output

frepp is a FRE utility that creates and submits postprocessing scripts.

To view available options:

frepp -h

Execute frepp:

frepp [-options:] xml_file experiment_name

frecheck : Verify the Results of Regression Tests

frecheck runs the ardiff command to bitwise-compare the restart files produced by frerun-generated runs and prints a report. It also generates a table listing the runtime for different processor counts by parsing the fms.out files.

To view available options :

frecheck -h

Execute frecheck:

frecheck [-options:] xml_file experiment_name

frestatus : Check the Status of Batch Compiles and Runs

frestatus shows the status and progress of batch compiles and runs. It parses the batch scripts’ stdout to relay the status of your rtsmake and rtsrun shell scripts. Note that if you ran the scripts interactively, no status information will be found, since there will be no batch script stdout files.

To view available options:

frestatus -h

Execute frestatus:

frestatus [-options:] xml_file experiment_name

frepriority : Manage the Queues for an Experiment

frepriority implements job control for production runs. It changes batch queue information for an experiment. Without arguments, it will report job state, priority and remaining queue allocations for the experiment.

To view available options:

frepriority -h

Execute frepriority:

frepriority [-options] -x xml_file experiment_name

For example, to downgrade an experiment to windfall priority:

frepriority -x xml_file --project windfall -p PLATFORM -t TARGET experiment_name

and to restore the same experiment to normal priority:

frepriority -x xml_file --project gfdl_YOURGROUP_c2 -p PLATFORM -t TARGET experiment_name

frescrub : Delete Unnecessary Post-processing Files

frescrub deletes duplicated postprocessing files generated by FRE. frescrub will not delete any NetCDF files if any errors were reported for their associated cpio files.

To view available options:

frescrub -h

Execute frescrub:

frescrub [-options:] xml_file experiment_name

freppcheck : Check Post-processing Files

freppcheck checks and reports missing postprocessing files.

To view available options:

freppcheck -h

Execute freppcheck:

freppcheck [-options:] xml_file experiment_name

freindent : Indent XML Tags

This is a utility to indent and align the tags in your XML file.

It will insert CDATA tags around your csh for safety. It will also substitute entity values into your XML. If this is not desired, please try

xmllint --format in.xml > out.xml

frerts.csh : Streamlined Regression Testing

The frerts utility creates a shell script to launch fremake, frerun and frecheck in sequence. All the compilable experiments in the XML file will be compiled, and a suite of regression tests will be run. Then frecheck will be called, and the frecheck output will be emailed to the user.

This tool has been written by Niki Zadeh. Please see Niki for usage instructions.

diag_table_chk : Check Diag Table Syntax

Check for errors in diagnostic tables.


XML Files

XML (Extensible Markup Language) is a markup language similar to HTML, but with customizable sets of tags. We have defined tags appropriate for the information FRE tools need about climate models. FRE XML files contain all experiment-specific variables. Users may edit these XML files in order to customize the model configuration.

XML Schema

A schema and schema documentation have been developed for FRE.

An XML can be checked against the FRE schema with the frelist command:

frelist -C -x AM3.xml

Variables in FRE XML files

Predefined Placeholders

Predefined placeholders can be used anywhere in your XML file:

$(site)           - the currently used FRE site
$(siteDir)        - the directory with site-dependent configuration files
$(suite)          - the name of your XML file without path and extension (convenient to organize directories)
$(platform)       - value of --platform option (default is "hpcs.hpcs" @ GFDL and "doe.doe" @ ORNL)
$(target)         - value of --target option (default is "prod")
$(name)           - experiment name, also can be used without parentheses (for compatibility)
$(root)           - same as $(rootDir), also can be used without parentheses (for compatibility), might be deprecated in the future
$(stem)           - value of your <directory stem="..."> attribute
$(rootDir)        - the value of your <directory type="root"> </directory>
$(srcDir)         - the value of your <directory type="src"> </directory>
$(execDir)        - the value of your <directory type="exec"> </directory>
$(scriptsDir)     - the value of your <directory type="scripts"> </directory>
$(stdoutDir)      - the value of your <directory type="stdout"> </directory>
$(stateDir)       - the value of your <directory type="state"> </directory>
$(workDir)        - the value of your <directory type="work"> </directory>
$(ptmpDir)        - the value of your <directory type="ptmp"> </directory>
$(archiveDir)     - the value of your <directory type="archive"> </directory>
$(postProcessDir) - the value of your <directory type="postProcess"> </directory>
$(analysisDir)    - the value of your <directory type="analysis"> </directory> 

Please note that the parentheses aren’t optional here; they are required. This distinguishes placeholders, which are to be expanded by FRE tools, from variables that will be expanded by the C shell in the scripts.

Predefined placeholders can be used inside the XML file to refer to two classes of data items:

  1. options and arguments you supplied when you called a FRE tool (suite, name, platform, target)
  2. various directories which are defined for the selected combination of platform and target.

The following rules govern the use of directory placeholders in the <setup> section. The root directory is intended to be a reference location on which many other directories are based, so it must be defined before the directories that depend upon it. This means that you can define the source directory using $(rootDir), but not vice versa. All directories must also be defined in the order listed above. For example, you can define the postProcess directory using $(archiveDir), but not vice versa, because the postProcess directory is listed after the archive directory. The $(analysisDir) is the last one, so it can’t be used to define other directories.

User-defined Placeholders

You can define your own placeholder variables with properties.

<property name = "foo" value = "bar"/>

Global properties are immediate children of the <experimentSuite> element. Platform-dependent properties are immediate children of the <platform> element. Please use <property> rather than XML entities when possible.
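A minimal sketch showing both kinds of property (the MY_PTMP name is illustrative; the values echo examples used later in this document):

<experimentSuite rtsVersion="4">
   <property name="FRE_STEM" value="myfmsruns"/>             <!-- global: child of experimentSuite -->
   <platform name="gfdl.default">
      <property name="MY_PTMP" value="/vftmp/$USER/ptmp"/>   <!-- platform-dependent: child of platform -->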

Environment Variables Expanded by FRE

Shell environment variables such as $HOME, $USER, and $TMPDIR will be expanded by FRE; they appear throughout the directory examples below.

You can customize directories where various files will be located by editing the directory specification tags at the top of your xml file.

Available Directory XML Tags

  <platform name="gfdl.default">
      <root> a reference location </root>
      <src> for FMS source code </src>
      <exec> for object files and executables </exec>
      <scripts> for run and post-processing scripts </scripts>
      <stdout> for run and post-processing stdout </stdout>
      <state> for state files to be used by multi-job runs </state>
      <work> a scratch directory where a model should run </work>
      <ptmp> an intermediate directory for input/output data caching </ptmp>
      <archive> for model output data </archive>
      <postProcess> for postprocessing output </postProcess>
      <analysis> for analysis figures </analysis>


How they might look in your xml file:

  <platform name="gfdl.default">
      <root> $HOME/$(stem) </root>
      <src> $(rootDir)/$(name)/src </src>
      <exec> $(rootDir)/$(name)/$(platform)-$(target)/exec </exec>
      <scripts> $(rootDir)/$(name)/$(platform)-$(target)/scripts </scripts>
      <stdout> $(rootDir)/$(name)/$(platform)-$(target)/stdout </stdout>
      <state> $(rootDir)/$(name)/$(platform)-$(target)/state </state>
      <work> $TMPDIR/$(stem)/$(name)/$(platform)-$(target) </work>
      <ptmp> /vftmp/$USER/ptmp </ptmp>
      <archive> /archive/$USER/$(stem)/$(name)/$(platform)-$(target) </archive>
      <postProcess> $(archiveDir)/pp </postProcess>
      <analysis> $(archiveDir)/analysis </analysis>

Directory Defaults

Directory defaults depend on site. For example, directory defaults for the GFDL Post-Processing and Analysis cluster are below:

  • stem = $(suite)
  • rootDir = $HOME/$(stem)
  • srcDir = $(rootDir)/$(name)/src
  • execDir = $(rootDir)/$(name)/$(platform)-$(target)/exec
  • scriptsDir = $(rootDir)/$(name)/$(platform)-$(target)/scripts
  • stdoutDir = $(rootDir)/$(name)/$(platform)-$(target)/stdout
  • stateDir = $(rootDir)/$(name)/$(platform)-$(target)/state
  • workDir = $TMPDIR/$(stem)/$(name)/$(platform)-$(target)
  • ptmpDir = /vftmp/$USER/ptmp
  • archiveDir = /archive/$USER/$(stem)/$(name)/$(platform)-$(target)
  • postProcessDir = $(archiveDir)/pp
  • analysisDir = $(archiveDir)/analysis

The frelist tool can list all the directories for all experiments in the XML file:

frelist --xmlfile xml_file --directory=all

If you add an experiment name, then it will list directories for this particular experiment only:

frelist --xmlfile xml_file --directory=all experiment_name

In order to get only selected directory types, you can list them in the --directory option:

frelist --xmlfile xml_file --directory=work,ptmp experiment_name

Controlling directory locations with the FRE directory stem

The default directory locations reflect the best known practices, and using them is recommended. A directory customization called the “stem” is the recommended way to customize your file locations. The “stem” is a partial directory path, used together with the default directory locations, that describes where you want to store all your files on the different filesystems. This feature makes it easier to work with directory locations between the two sites. Here’s an example:

<experimentSuite rtsVersion="4">
  <property name="FRE_STEM" value="myfmsruns"/>
    <platform name="gfdl.default"> 
      <directory stem="$(FRE_STEM)"/>

Each platform needs only that one directory line. Then with “frelist -d all”, which lists directory paths, you can see the file locations which include the “myfmsruns” portion of the filepaths.

Don’t forget that you need to provide frelist with the “-t target” option for openmp or other non-default targets.

If you want to use a FRE stem but need to customize a particular directory, you can combine the stem attribute with an explicit directory tag.
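A hedged sketch of the complete element, assuming the nested directory tags from the “Available Directory XML Tags” listing above (the archive override path is illustrative):

<directory stem="$(FRE_STEM)">
    <archive> /archive/$USER/special_project/$(name)/$(platform)-$(target) </archive>
</directory>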

Platforms and Sites

FRE currently supports five “sites”.


FRE Site Name   Machines                                  Physical Location
gfdl-ws         GFDL workstations                         Princeton, NJ
gfdl            GFDL Post-Processing/Analysis Cluster     Princeton, NJ
ncrc            Gaea @ ORNL                               Oak Ridge, TN
doe             Jaguar @ ORNL                             Oak Ridge, TN
nasa            Columbia @ NASA                           Ames, CA


The “platform” name, used inside XML files, has a “site” prefix, which FRE uses to locate site-specific configuration files.


<platform name="ncrc.default">
<platform name="gfdl.default">

Please refrain from using other dots in your platform name.

Compilation Targets

FRE provides three compilation targets you can specify with the --target command line option:

  • prod : high levels of code optimization, for production runs
  • repro : compiler flags (such as -fltconsistency) to facilitate reproducible answers across different processor counts
  • debug : debug options, low optimization, high verbosity

If you do not specify a “--target” option to fremake, frerun, frestatus, etc., you will get the “prod” (production) compilation configuration.

There are predefined target extensions for other compilation needs:

  • hdf5 : add directives to generate HDF5-based NetCDF files
  • openmp : add directives to compile and link with OpenMP

You can use these in conjunction with the main targets, as in “--target repro,hdf5”.

If the predefined options are not sufficient, you can either amend the predefined targets in your XML file, or define your own targets. Enclose your directives in a <cppDefs> node, and place it in a <compile> node with a “target” attribute:

    <component name="XYZ" ... >
      <compile target="TargetName">
        <cppDefs> compiler/linker directives </cppDefs>
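With such a target defined, you would then select it on the command line in the usual way, for example (sketch):

fremake --target TargetName -x xml_file experiment_name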

Make templates and Targets

Users do not need to specify make template files in the xml unless they have special needs that the basic options do not cover. The default make templates are written so that there is only one make template per platform. They have various sets of compile options defined. FRE will set these automatically based on the target:

   REPRO=true if the target contains the "repro" substring 
   DEBUG=true if the target contains the "debug" substring
   NETCDF=4   if the target contains the "hdf5" substring and platform/csh contains a system 
              module which loads "netcdf-4.0.1" or newer (otherwise it will default to 3)
   OPENMP=on  if the target contains the "openmp" substring

FRE version 4 is backwards compatible with FRE version 2 and version 3 files, so leaving old settings should work and give the same results as before.

SrcList – Compiling extra source code

The <srcList> tag provides a way to specify extra source code files to include in the compilation.
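A hedged sketch of how this might look, assuming <srcList> sits inside a component’s <compile> section (the component name and file paths are illustrative):

<component name="atmos" paths="atmos_param">
   <compile>
      <srcList>
         path/to/extra_module1.F90
         path/to/extra_module2.F90
      </srcList>
   </compile>
</component>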


Experiment Inheritance

It is possible for an experiment to inherit parameters from another experiment in the same XML file. Aside from a few special cases, the rules governing inheritance are as follows.

  • fremake and frerun will look inside the experiment itself for the data.
  • If no data is found, fremake and frerun will look for an inherit attribute on the experiment tag. If an inherit attribute is found, fremake and frerun will look inside the named experiment for the data, recursing in this manner until the data is found or there are no more experiments from which to inherit.
  • If the data is not found in the inheritance tree, the value will be empty. If the value was required, an error message will be printed. If the value was optional, a warning message will be printed if you use the -v option on fremake and frerun.

If you want to run several configurations of the same model (using the same executable but different namelists or other input files) you don’t need to check out and compile the same code multiple times. Compile once and use the ‘inherit’ attribute on the remaining experiments.
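A minimal sketch of this pattern (the experiment names are illustrative; the namelist follows the examples below):

<experiment name="base_expt">
   <!-- <source> and <compile> sections here: code is checked out and compiled once -->
</experiment>
<experiment name="variant_expt" inherit="base_expt">
   <!-- no <source> or <compile>, so the executable is inherited -->
   <namelist name="ocean_model_nml">
      baroclinic_split = 2
   </namelist>
</experiment>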

Special Case: Namelists

Namelists will be parsed and given priority as follows:

1. Namelists directly in the XML of the experiment

<experiment name="mom4_test1_static10" inherit="mom4_test1">
      <namelist name="ocean_model_nml">
      baroclinic_split = 4
      surface_height_split = 1
      barotropic_split = 60
      energy_diag_freq = 12

2. Namelists in files, in the order the files are given in the XML

<experiment name="mom4_test1_static10" inherit="mom4_test1">
      <namelist file="/home/user/file1"/>
      <namelist file="/home/user/file2"/>

3. Namelists that are directly in the XML of the parent experiment.

4. Namelists files from the parent experiment.

A warning will be printed if you attempt to specify a namelist more than once in the inheritance tree, and an error will be printed if you try to give the same namelist twice directly in XML tags for an experiment. It is currently not possible to inherit only some values from a given namelist.

Special Case: Field Tables

Field tables are currently not parsed and are inherited on the basis of their file names. If you specify at least one field table, no field tables will be inherited from the parent experiment.

Special Case: Executable

The name of the executable has a default value if not specified anywhere. The default location is $root/$name/exec/fms_$name.x. The FRE scripts decide whether a child experiment should have its own executable based on whether you have specified new data in the <source> or <compile> sections of the XML for your child experiment. If you intend for your experiment to inherit an executable, you should not re-specify anything in those sections of your child experiment, because then fremake will think you want to recompile with the new data. In fact, you do not need to run fremake at all on experiments which inherit an executable, since the purpose of fremake is to create an executable.

Special Case: mkmf template

The location of the mkmf template is a special case because by default it is controlled by FRE. The default can be overridden by specifying the mkmf template in the setup section at the top of your XML file:

   <mkmfTemplate file="/path/to/template/file"/>

Or the mkmf template can be specified in the XML directly (a sketch, using the <mkmfTemplate> tag shown above to wrap the template contents):

 <mkmfTemplate>
   FC = f90
   CPPFLAGS = -macro_expand
 </mkmfTemplate>

That in turn can be overridden on an experiment-by-experiment basis. The location of the mkmf template can be specified for an individual experiment in the compile section of the XML.

<experiment name="mom4_test1_static10" inherit="mom4_test1">
      <mkmfTemplate file="/path/to/template/file"/>


Post-processing

One can create time series and climatological averages with the utility frepp. This can be called from the runscript as the model runs, or offline. Please note that in order to take advantage of the postprocessing, your runs should start on January 1; otherwise, monthly and seasonal averages cannot be properly calculated. If you want to start your model run on another date, please run from that date to January 1, then start a new run including the postprocessing. Also, the postprocessing can handle segment lengths of 1 month, 2 months, 3 months, 4 months, 6 months, or 1 year. It does not currently support other segment lengths.

Time series files consist of one file per variable. It’s not healthy for the system to store many small files in /archive, so all the files for a given time chunk are archived together in a cpio file. These cpio files are only created for time chunks up to 20 years in length; time series with chunk lengths of 20 years or longer will not have cpio files. After you run frepp, you’ll have both individual files and cpio files. Please use frescrub to remove the individual files when you are finished with them.

Time average files are climatology files with all variables in one file, but only one averaged time level per file. No cpios are created for time average files.

Check for missing postprocessing files

The freppcheck utility checks for missing postprocessing files. It will ask for the start year and end year you want to check. Call as follows:

freppcheck -x xml_file experiment_name

It is suggested to run freppcheck with the -A option to check all variables and cpio files before running frescrub, which will delete intermediate files.

freppcheck -A -x xml_file experiment_name

Interpolation to different vertical levels

Several options are available for interpolating your data to different atmospheric pressure levels.

Keyword       Vertical levels (Pa)
ncep          100000 92500 85000 70000 60000 50000 40000 30000 25000 20000 15000 10000 7000 5000 3000 2000 1000
am3           100000 92500 85000 70000 60000 50000 40000 30000 25000 20000 15000 10000 7000 5000 3000 2000 1000 500 300 200 100
hs20          2500 7500 12500 17500 22500 27500 32500 37500 42500 47500 52500 57500 62500 67500 72500 77500 82500 87500 92500 97500
era40         100000 92500 85000 77500 70000 60000 50000 40000 30000 25000 20000 15000 10000 7000 5000 3000 2000 1000 700 500 300 200 100
narcaap       2500 5000 7500 10000 12500 15000 17500 20000 22500 25000 27500 30000 32500 35000 37500 40000 42500 45000 47500 50000 52500 55000 57500 60000 62500 65000 67500 70000 72500 75000 77500 80000 82500 85000 87500 90000 92500 95000 97500 100000 102500 105000
ar5daily      100000 85000 70000 50000 25000 10000 5000 1000
ncep_subset   925 850 700 500 250

To interpolate native model vertical levels to these vertical levels, add the attribute zInterp to the appropriate component node of your XML as follows:

<component type="atmos" source="atmos_month" zInterp="ncep">

If you want data both on the original model levels and interpolated to new vertical levels, you can use more than one component, for example:

<component type ="atmos" source="atmos_month">
<component type ="atmos_era40" source="atmos_month" zInterp="era40">

This will give you the post-processing directory called atmos with data on the original model levels, and a directory called atmos_era40 with data that has been interpolated to era40 levels.

Creating postprocessing data or figures offline

It is fairly straightforward to submit the postprocessing offline if you know which arguments are needed. Here are some useful options:

         -t year      = the (first) year of data to process
         -p num       = "plus num years": additional years to process after the first year
         -d dir       = path to history directory [default $archive/$name/history]
         -A           = generate analysis figures only, based on existing pp data
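For example, a sketch combining these options to process years 1983-1987 from a non-default history location (the paths and years are illustrative):

frepp -t 1983 -p 4 -d /archive/$USER/myexpt/history -x xml_file experiment_name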

Generate postprocessing data or figures from someone else’s experiment

To generate postprocessing data:

  1. Copy their XML file to a directory you own.
  2. Ensure it is a valid FRE4 xml file; if it is a FRE2 file, it may need to be converted to FRE4 with the freconvert tool. One helpful test is to run frelist -x xmlfile. Edit the top of the XML file so that the directories (or the FRE "stem") point to the places where your scripts, stdout, archive output, postprocessing output, and analysis figures should be written.
  3. Call frepp, making sure to use the -d option to specify the location of the original history files you want to process, ie, -d /archive/user/cm2/cm2o_cmip/history. For example, you might use a command like frepp -d /archive/user/cm2/cm2o_cmip/history -t 0001 -p19 -x CM2.xml cm2o_cmip to create the postprocessing data and figures for years 1-20 for experiment cm2o_cmip. 

To generate analysis figures only:

For analysis figures only, you don’t need your own copy of the xml file. Instead, you can use the -A option for analysis only, and the -O option to tell frepp where to write the output. If you want to overwrite your first attempt, you’ll need -R.

-A     = run analysis only
-R     = regenerate; submit analysis scripts regardless of whether they already exist
-O dir = where to put output figures; this argument is normally used with -A (run analysis only) and must be used if the xml file is not yours

Here’s an example command:

frepp -A -O $HOME/analysis -d /archive/user/cm2/cm2o_cmip/history -t 0001 -p19 -x CM2.xml cm2o_cmip

Automated creation of diagnostic figures

A list of analysis scripts currently available for use with FRE is available at /home/fms/analysis/.

These analysis tags should be put inside the tags for the data that the analysis operates on. To use an analysis script with FRE, you need to convert your driver script to a template script by inserting the following lines at the beginning of your driver script. You only need to insert the lines your script needs.

set in_data_dir    #pp directory containing files to be analyzed (.../pp/atmos/ts/daily/5yr)
set in_data_file   #list or csh expression for all filenames to be analyzed
set descriptor     #experiment name
set out_dir        #directory to write output files
set WORKDIR        #working directory for script execution
set hist_dir       #directory containing history files
set frexml         #path to xml file
# plotting years
set yr1            #first year to analyze, from -Y
set yr2            #last year to analyze, from -Z
set specify_year   #single year to analyze
# data years, only used for making description file, only apply to ferret scripts using
# time series as input
set databegyr      #same as yr1 but adjusted if necessary to match timestamp of earliest data needed for analysis
set dataendyr      #same as yr2 but adjusted if necessary to match timestamp of last data needed for analysis
set datachunk      #the chunkLength of the timeSeries in years
set MODEL_start_yr #the beginning of the model simulation
set freq           #the frequency of the time series
# Specify batch mode "batch" or interactive mode "interactive"
set mode
# Specify mom's version, either om2 or om3, because some files depend on mom's grid
set mom_version
# used as mask file
set gridspecfile
# used as mask file
set staticfile

Then put the template script in the analysis tag in an XML file like this:

<analysis script="/home/user/bin/my_analysis_script.csh"/>

Then frepp will read your template script, specify the variables, create complete scripts, and submit them if the mode is set to batch. Here is a listing of the available attributes for the analysis tag.

<analysis switch="on" mode="batch" momGrid="om3" specify1year="4-digit year"
          startYear="4-digit year" endYear="4-digit year"
          outdir="where you want to save your figure and text outputs"
          script="your template script with full path"

Only the script attribute is required; all others are optional. Here is a description of available attributes. The first value shown for the attribute is the default setting, followed by other valid options in descending priority.

  • switch=”on|off” signals frepp to run|not run the analysis
  • mode=”batch|interactive” signals frepp to submit|not submit a script automatically
  • momGrid=”om3|om2_173jrows|om2_174jrows” specifies the mom grid if necessary.
  • specify_year=”XXXX” specifies a single year required by user’s scripts, usually for processing daily data.
  • startYear=”XXXX” specifies a specific year to start producing figures. The default value is the first year for which postprocessing data is available.
  • endYear=”XXXX” specifies a specific year to stop producing figures. The default value is the last year for which postprocessing data is available.
  • outdir=”” specifies the output directory where you want to save your figure and text output. You can also specify a directory with the -O command line argument to frepp, or with an attribute in the setup section of your xml file. If none of the above methods are used, the figures will be placed in /net2/user/analysis/$expt
  • script=”” specifies the template script that will be read by frepp.
  • cumulative=”yes|no” specifies whether the analysis will be cumulative (from beginning of run to latest available data) or just for the date range yr1-yr2.

If your analysis scripts work with more than one experiment, then you need to tell FRE what those additional experiments are with the <addexpt> tag, as follows.

    <addexpt xmlfile="your-xml-file" name="your-expt-name"/>

Here is an example:

<analysis script="/home/fjz/rtspp_analysis/templates/atw_atmos_obsmodmod_ts_mon.csh">
   <addexpt xmlfile="/home/ccsp/fjz/ipcc_ar4/CM2.1U_Control-1990_E1.xml name="CM2.1U_Control-1990_E1"/>
   <addexpt xmlfile="/home/ccsp/fjz/ipcc_ar4/CM2.1U_Control-1860_D4.xml name="CM2.1U_Control-1860_D4"/>

In your analysis scripts, insert these lines at the beginning to refer to the additional experiments.

set descriptor
set in_data_dir
set in_data_file
set out_dir
set descriptor_2
set in_data_dir_2
set in_data_file_2
set out_dir_2
set descriptor_3
set in_data_dir_3
set in_data_file_3
set out_dir_3

Creating monthly data by averaging daily data

To average daily data into monthly data, you need to add a special monthly timeSeries with the averageOf attribute to your XML:

<component type="atmos" zInterp="ncep" start="0001" source="atmos_month">
   <timeSeries freq="daily" source="atmos_daily" chunkLength="5yr"/>
   <timeSeries freq="monthly" chunkLength="5yr"/>
   <timeSeries freq="monthly" averageOf="daily" chunkLength="5yr">
      <variables> t_ref_min t_ref_max </variables>

There are several things to note about this functionality:

  1. You can either calculate this in the same postprocessing job that creates the daily timeSeries, or do it anytime after the daily timeSeries has been created. (ie, you can run just this piece of postprocessing offline.)
  2. The chunkLength requested for the averaged file must be the same as the chunkLength of the daily timeSeries file, ie, 5yrs. Longer chunks of pp should pick up the new variables and create 20yr timeSeries from the 5yr ones.
  3. This is specifically designed with the noleap calendar in mind and currently does not check the model calendar.
  4. The <variables> tag is required for this type of timeSeries calculation.

Calculating long timeseries from existing timeseries

The ‘from’ attribute on timeSeries and timeAverage tags can override which chunkLength or interval of existing pp data a timeSeries or timeAverage is calculated from, ie

<timeSeries freq="monthly" chunkLength="200yr" from="100yr"/>

So if you just want to create a 200yr timeSeries from existing 100yr timeSeries, your XML file should contain ONLY the timeSeries line above, with no other tags. Simply adding the line above to your existing XML file in this situation is a bad idea, because you will be wasting cpu cycles. The frepp utility will calculate each timeSeries you list in your XML file, so by adding the 200yr timeSeries line above to an existing XML file, your frepp call will duplicate all of the other work that has already been done before creating the 200yr timeSeries.


Adding new variables after a model run but before postprocessing with refineDiag

It is possible to insert a snippet of csh to operate on history files, produce new history files, and save them to be postprocessed along with the other model output.

This snippet is inserted in the xml as:

<postProcess combine="staged">
         <refineDiag script="/path/to/snippet.csh"/>

Multiple <refineDiag> tags may be used. They will be executed sequentially; order is not guaranteed at this time. [Right now, no steps are taken to ensure intact history files between refineDiag scripts. This may be implemented in the future if it proves to be needed.]

The csh snippet should be written to the following guidelines:

1. It will be sourced from a script with access to the following variables. Sample values are given; these will be filled in by FRE.

  set name = c48solo_lm3p7
  set rtsxml = /home/arl/testing_fre/xml/slm4ii.xml
  set work = $TMPDIR/c48solo_lm3p7_19830101/work
  set root = /home/arl/testing_fre
  set archive = /archive/arl/testing_fre
  set scriptName = /home/arl/testing_fre/c48solo_lm3p7/hpcs.hpcs-defaultTarget/scripts/postProcess/c48solo_lm3p7_refineDiag_19830101
  set oname = 19830101
  set refineDiagDir = "$TMPDIR/c48solo_lm3p7_19830101/history_refineDiag/"
  set basedate = 1983,01,01,0,0,0

The refineDiagDir setting defines a directory which is empty when the script runs, and to which new history files should be moved before the snippet ends. This directory is different from the $cwd of the snippet; see below.

The snippet will also have access to the following tools:

  setenv FMSPP_PATH /home/fms/local/ia64/v11
  set NCVARS = $FMSPP_PATH/list_ncvars.csh
  set TIMAVG = "$FMSPP_PATH/timavg.csh -mb"
  set SPLITNCVARS = $FMSPP_PATH/split_ncvars.csh
  set NCEXISTS = $FMSPP_PATH/ncexists
  set FREGRID = "mpirun -np 4 /home/fms/local/ia64/v11/fregrid_parallel"
  set MPPNCCOMBINE = "/home/fms/bin/mppnccombine-2.1.7_ia64 -64"

It will execute the platform csh specified in the xml file to load appropriate modules, as is done in the standard postprocessing scripts.

2. The snippet will be run in a directory within $TMPDIR into which precisely one year of model output history files have been extracted. The duration of time in each diagnostic file depends on the length of the production run segment; ie, you could have one file containing twelve months of diagnostic output or twelve files each containing one month of diagnostic output.

3. The new history files should have names different from the existing history files; do not reuse the name of a file created from your diag table. No other requirements are placed on the name, beyond that it cannot be the same as one of your existing diag_table diagnostic output files. [Keep the prepended date string the same as the original history file.]

4. The snippet should save the new history files it creates to the directory $refineDiagDir. After the snippet completes, the job will save the output in $refineDiagDir to /ptmp and to a file like $archivedir/exptname/history_refineDiag/. Future postprocessing will then access that output as well as the original history file output from the model, which is unchanged.
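A minimal hedged csh sketch following these guidelines (the land_month file name and renaming scheme are illustrative; a real snippet would compute new diagnostics with the tools listed above rather than copy, and $refineDiagDir is provided by FRE as described above):

# loop over one kind of history file extracted into the current directory
foreach f ( *.land_month.nc )
   # derive a distinct name, keeping the prepended date string intact
   set newfile = `echo $f | sed 's/land_month/land_month_refine/'`
   # a real script would create $newfile by computing new variables;
   # this sketch just copies the original under the new name
   cp $f $refineDiagDir/$newfile
end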

Here is an example:

FREROOT: /home/arl/refineDiag_example
xml: /home/arl/refineDiag_example/xml/refineDiag.xml
experiment: c48solo_lm3p7
refineDiag script: /home/arl/refineDiag_example/input/land_month_refine.csh
archive location: /archive/arl/refineDiag/c48solo_lm3p7

FRE Curator

The FRE Curator tools allow for data publishing on the data portal. Publishing consists of converting metadata standards, maintaining quality assurance control, recording published metadata in a database, and searching, discovering, navigating, and understanding the data in the data portal.

The Curator Database contains metadata regarding all aspects of the modeling and publishing processes. This includes model configurations and runs, model output data, observations, TDS aggregation metadata, LAS configurations, project specifics, data portal statistics, and user, group, and data access attributes and other administrative information. Project specifics include IPCC standard variable tables.

Model Database Interface features:

The site offers a navigation panel that allows you to browse all the experiments. Furthermore, you can view corresponding experiment information. There is also a filter which allows the panel to only display experiments with specified components. Users can also compare experiments, generate analysis figures, generate RTS XML and check job status. Please view the help tag for further details and information.

There are 5 ways to access data on Data1.

  • View/navigate tables on Data1 via IPCC AR4 variable name and experiment
  • FTP
  • HTTP
  • OPeNDAP
  • Live Access Server (LAS)

OPeNDAP offers the most benefits to users by allowing them to access data anywhere on the Internet from a wide variety of client programs.

LAS is a configured web server that provides access to data using an easy to navigate user interface.

A diagram showing how the FRE Curator tools interact and operate is available separately.

fredb-pp : Data Publishing

fredb-pp is at the highest level of the FRE Curator chain. Users can start here with only an experiment XML and an arbitrary directory of model output. This tool establishes default values for the lower-level tools, calls the lower-level tools accordingly, and overrides default values by accepting command-line parameters. fredb-pp internally calls fredb in order to map an XML file to the Curator database. It will generate and store a checksum of the XML file to prevent redundant calls to the fredb script. The output from the fredb call will be passed into a fremetar call.

To view available options:

fredb-pp -h

Execute fredb-pp:

fredb-pp [-options:] -x xml_file -n experiment_name -d model_output_directory

The -w option will call fremetar after mapping the experiment to the database, to rewrite the output metadata.

fredb : Data Publishing Manually

fredb is the next entry point into the FRE Curator chain. It is suggested that only power users or developers use this instead of fredb-pp to manually map an experiment. It produces the experiment id, realization id, and run id (triple ID) that fremetar needs to access the metadata. fredb is intended to be called only once for a single experiment. It will internally call freconvert and frecanon in order to ensure that the xml syntax has been updated and that the xml is compatible for mapping into the Curator database. A successful fredb call is mandatory for fremetar to be initialized.

To view available options:

fredb -h

Execute fredb (manually):

fredb [-options:] -x xml_file experiment_name

freconvert : Update XML Syntax

freconvert converts old experiment XML files into the latest version.

To view available options:

freconvert -h

Execute freconvert (manually):

freconvert [-options:] -i input_file output_file

frecanon : Prepare XML for Input into Curator Database

frecanon dereferences all inherited XML fields and external links to prepare an XML file for entrance into the Curator Database. It flattens the inheritance tree of experiment configuration files into a single, independent experiment, making the XML self-sufficient. It is a bridge between the FRE tools and FREratorMap. It can be called independently by advanced users or called automatically by fredb.

To view available options:

frecanon -h

Execute frecanon (manually):

frecanon [-options] -x input_file -o output_file experiment_name

FREMetar : Standardize Post-processing NetCDF Metadata

FREMetar was developed to eliminate the CMORization step after postprocessing. It adds or corrects NetCDF metadata using the CMIP standard metadata specifications in Curator. It works on global, axis, and variable metadata. In addition, FREMetar renames NetCDF files to adhere to CMIP specifications. It is the final entry point into the FRE Curator chain and is primarily for developers. It requires the database triple ID for an experiment that was mapped to the database. FREMetar will be invoked by fredb-pp unless specified otherwise while running manually.

To view available options:

fremetar -h

Execute fremetar (manually):

fremetar -d dir -e experid -z realizid -r runid [-options:]

(You must provide dir, exper_id, realiz_id, and run_id. Note: the triple ID is produced by fredb and can be found in the database.)

FREMapService / FREMapClient : Quality Assurance Control

These Java applications ensure database integrity and control write access to the Curator database. When a fredb call is made, the FREMapClient application communicates with the FREMapService and passes it the user’s runtime environment information. FREMapService will then call either delete.csh, for removing an experiment from the database, or mapping.csh, for adding one. mapping.csh is responsible for calling FREratorMap. This high-level structure controls the queue of experiment mappings in order to prevent processing two XMLs at the same time, and it defends data integrity. FREMapService and FREratorMap run on the cobweb machine, where the Curator DB is also located.


FREratorMap is a Java application that takes an XML file and maps its elements to fields in a database. The approximate time for this process is 5-15 minutes. After it is completed, a log file matching the date and time of the run will be found in /home/pcmdi/fre4/log. That log file will contain the experiment name, xml file name, user name, the unique triple ID, and any errors that occurred. The triple ID can be used to obtain information by searching the database. Experiments are stored in the DB in a manner that completely excludes redundancy: elements of stored experiments are reused in later mappings, so experiment loading comprehensively checks existing DB elements in order to reuse them.

Automated Implementation

Data publishing can be further automated starting within the XML scripts. Users can add an analysis tag that creates a datapublisher template; this datapublisher will call fredb-pp after frepp is called.

FRE Features in Detail

Extending a production run

After a run has finished, take the following steps:

1. Increase the length of the run in the XML to the new full length of the run. (Update the <production> tag for the experiment.)

2. Use frerun -ext to generate and submit a new runscript (see the sketch after this list).

3. Use frepriority to increase the number of queue allocations.
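A sketch of the step 2 command, using the -ext option named above with the usual arguments:

frerun -ext -x xml_file experiment_name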

During a run, take the following steps:

1. Increase the length of the run in the XML to the new full length of the run. (Update the <production> tag for the experiment.)

2. Use frerun -ext to generate a runscript. Frerun will overwrite the existing runscript. Do not use the -s option or submit the script again. The next time the script reloads, it will pick up the new version of the runscript.

3. Use frepriority to increase the number of queue allocations.

NetCDF Library Specification

For Riga and later FMS code and FRE tools, the default is to load the latest NetCDF library (module load netcdf-4.0.1) but create NetCDF classic format output files with 64 bit support. FRE will adjust the compiler options and cppDefs; you do not need to have -Duse_netCDF3 or -Duse_LARGEFILE in your XML file.

If you do want to output the new NetCDF file format, use the target “hdf5”. You can specify this along with the predefined targets listed above, ie, “--target debug,hdf5”.

Using TotalView with FRE

TotalView is a GUI-based debugger. This analysis tool provides control over process and thread execution, with visibility into program state and variables. It is ideal for reproducing and troubleshooting errors that can occur in programs.

To initiate TotalView during a fre experiment run, follow these steps:

fremake -t debug -x xml_file experiment_name

Submit the job interactively (do not use qsub).

After fremake is finished, do a frerun:

frerun -t debug [options] -x xml_file experiment_name

When that completes, the path to an executable will appear. This runscript must be edited for TotalView to work. There are two steps to editing this file. First, you need to add a module load in the “global environment settings” section, where there is a list of module loads; one needs to be added for TotalView. Note that the example below shows one edition of TotalView; use the edition appropriate for your system. To see the available editions, do a module avail and select from there.

module load totalview.8.8.0-0

Second, you need to edit one line in the runscript. The line will look as follows:

/usr/bin/time -p mpirun -np $npes $executable:t |& tee fms.out

Note that mpirun is used on hpcs and aprun is used on doe; use whichever applies to you. Change the line to:

totalview mpirun -a -np $npes ./$executable:t

Now you can save the runscript and execute it. A few minutes into the run, Totalview will appear as 3 windows.

See the TotalView user documentation for further details.

Compiling with Libraries

FRE can use pre-compiled libraries (.a files) at link time if their paths are provided in a <library> subsection inside a <component> section, as shown in the example below:

<component name="fms"  paths="shared" includeDir="/PATH_TO_src_DIR/shared/include" >
       <library path="/PATH_TO_LIBS_DIR/libfms.a"  headerDir="/PATH_TO_LIBS_DIR"/>

Note that “headerDir” is where the .mod files for that library are located; they must be present and up to date at link time. The “make” utility decides whether the library is up-to-date and recompiles it if needed.

Also note that the <library> subsection replaces both the <source> and the <compile> subsections in the <component>.

A working example of using precompiled libraries is given below. In this example the libraries are already compiled in the parent experiment “mom4p1_coupled” and only one component (“mom4p1”) needs to be compiled.

<property name="MY_LIBS" value="$root/mom4p1_coupled/$(platform)-$(target)/exec"/>

<experiment name="mom4p1_coupled_10by5" inherit="mom4p1_coupled">
Same mom4p1_coupled  but with static memory allocation.
   <component includeDir="$root/mom4p1_coupled/src/shared/include" paths="shared" name="fms" >
     <library path="$(MY_LIBS)/libfms.a"
         headerDir="$(MY_LIBS)"/>       </component>

   <component paths="ocean_shared" requires="fms" name="ocean_shared" >
     <library path="$(MY_LIBS)/libocean_shared.a"
         headerDir="$(MY_LIBS)"/>       </component>

   <component paths="atmos_null" requires="fms" name="atmos_null" >
     <library path="$(MY_LIBS)/libatmos_null.a"
         headerDir="$(MY_LIBS)"/>       </component>

   <component paths="ice_sis ice_param" requires="fms" name="ice_sis" >
     <library path="$(MY_LIBS)/libice_sis.a"
         headerDir="$(MY_LIBS)"/>       </component>

   <component paths="land_null" requires="fms" name="land_null" >
     <library path="$(MY_LIBS)/libland_null.a"
         headerDir="$(MY_LIBS)"/>       </component>

  <component name="coupler" paths="coupler" requires="fms land_null atmos_null ice_sis mom4p1_static_10by5">
     <library path="$(MY_LIBS)/libcoupler.a"
         headerDir="$(MY_LIBS)"/>      </component>

   <component paths="mom4p1" requires="fms ocean_shared" name="mom4p1_static_10by5" >
     <source root="/home/fms/cvs" versionControl="cvs" >
       <codeBase version="$(MOM_CVS_TAG)" > mom4p1_coupled </codeBase>

    -DMOM4_STATIC_ARRAYS -DNI_=360 -DNJ_=200 -DNK_=50 -DNI_LOCAL_=36 -DNJ_LOCAL_=40"


Development Runs that Do Not Write to Archive

With the new “frerun --noarchive” feature, you can run regression tests interactively without saving your output data to /archive. No cpio’s will be created, which means things should go faster for you when you’re trying to run short tests as part of a development cycle. To use it:

frerun --noarchive -r basic -x xml_file expt

Things to keep in mind:

1. Note that frecheck currently will not find output data that has not been archived. Support for this will be added to frecheck in a future release.

2. Your output data will be saved in the “ptmp” location, which for non-IPCC users defaults to a directory local to $TMPDIR. You will therefore only be able to see your output data if you run interactively. (If you run in batch, it’ll get deleted at the end of the job.)

3. If you do a run interactively with riga xml settings, your output data will end up in a directory under $TMPDIR/ptmp.

You might be tempted to “wipeftmp” between runs, but you may want to be more specific than that: keeping the input data in the $TMPDIR/ptmp directory will help speed data movement along.

4. If you run the same runscript multiple times during debugging, be careful of pre-existing output data in your ptmp directory.

FRE Properties


Many site-specific adjustments belong in the FRE properties file.

For example, FRE needs to know where various directories are within the system. This is a setting that can differ greatly based on which site is in use.

Each line of this file defines a setting, which is called an “external property”. Rather than hardcode directory locations and settings, the properties file gives the user the ability to easily configure different settings for different platforms.

A property is nothing more than a name-value pair, where each instance of “name” will be replaced by “value” within a script or XML file.
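As an illustration of the format only (these property names are invented for this sketch, not actual FRE settings), lines in the file might look like:

some.directory.property=/path/to/some/directory
some.other.property=some_value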

Properties can be referred to using the syntax:

$(property_name)
Note: two properties do not follow this convention: $root and $name. This syntax is deprecated, so do not use it in new XML files that you create!

The name-value pairs within this file are a special type of FRE property that is defined outside of the XML; thus they are given the name “external properties”.

Be very careful when modifying this file: an incorrect setting can break your FRE installation!

Constant Properties

FRE supports only one constant property:

  • site – the name of the current site

This is defined in the fre-system/init file, and cannot be changed at any other time.

Command Line Dependent Properties

A number of properties depend on the values provided to the various FRE tools via the command line. A few examples:

  • name – the name of the current experiment; if called with multiple experiment names, FRE will use them all sequentially
  • platform – the value of the --platform option, prefixed by the value of the platformSite property, if needed
  • platformSite – a site prefix for the --platform option if it exists, otherwise this will be the value of the site property
  • platformSiteRoot – the first part of the platformSite property without the numerical extension
  • platformSiteHead – the first part of the platformSiteRoot property before the dash
  • platformTail – the second part of the --platform option value (after the dot)
  • remoteUser – the value of the --remote-user option
  • siteDir – the directory where FRE stores its configuration files (it depends on the value of the platformSite property)
  • suite – the name of the XML file (defined by the --xmlfile option) without the directory path and extension
  • target – the value of the --target option
  • xmlfileOwner – the user id of the XML file owner
  • xmlfileOwner – the user id of the XML file owner

If you don’t provide the options --platform and/or --target, then FRE will assign default values to these properties.

The default value for --platform is “$(site).default” and the default value for --target is “prod”.

The --xmlfile option also has a default value – it’s “rts.xml”.

Customizable Directories

Customizable directories are defined in the <directory> tag of the XML.

The possible directory types:

  • root – usually a mount point for other directory types
  • src – directory for source files
  • exec – directory for compiled executables, libraries, etc.
  • scripts – directory for various types of scripts, contains the subdirectories “run”, “postProcess”, etc.
  • stdout – directory for model standard text output
  • stdoutTmp – temporary directory for model standard text output (currently used only on Gaea)
  • state – directory to store model state files
  • work – directory for model working files (fast scratch)
  • ptmp – cache directory between working directory and archive
  • stmp – cache directory for post-processing (currently unused)
  • archive – directory for long-term storage of model output, contains subdirectories “ascii”, “history”, “restart”
  • postProcess – post-processing directory
  • analysis – analysis directory

A useful property for defining directories is the “stem” property; if not defined, it defaults to a site-defined value. The “stem” is just a prefix that will be applied to directories that are defined in terms of it, and it can only be defined once per platform. Example:

<directory stem="/home/$USER/...">
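
Directories defined later can then incorporate the stem. A sketch, assuming the stem is available as the $(stem) placeholder (the archive path here is purely illustrative):

<directory type="archive">/archive/$USER/$(stem)/$(name)</directory>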

You can define a directory’s type in one of two ways:

<directory type="type">/path/to/directory</directory>

Using the second style saves you from repeating the <directory> tag. An example of the second style:
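
A minimal sketch, assuming the usual FRE convention of one child element per directory type (the paths are illustrative):

<directory>
  <src>/home/$USER/fre/src</src>
  <exec>/home/$USER/fre/exec</exec>
</directory>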


Each <directory> element will create a corresponding property with a name equal to the directory type plus the string “Dir”. If you don’t define all directory types, then site defaults will be defined for you.

Two special property definitions exist:

$root = $(rootDir)
$name = $(name)

The odd naming syntax for these two properties exists for backwards compatibility. You should always use the $(name) syntax.

You can define directories in terms of other directories, but to avoid infinite loops, you can only refer to directories defined previously in the definition list.

For example, the root directory (the first one in the list) can be referenced by any other directory, while the analysis directory (the last one) can’t be referenced by any other directories.

An example of the src directory:
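
A minimal sketch, building the path from previously defined properties (the exact layout is illustrative):

<directory type="src">$(rootDir)/$(name)/src</directory>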


Internal Properties

Internal properties are defined within the XML file. You can declare them in one of two ways:

<property name="name" value="value"/>
<property name="name">value</property>

An example:

<property name="archive_prefix" value="/archive/afy/FRE/siena"/>
<dataFile ...>

Properties are evaluated from the top to the bottom of the XML file, so a property value can refer to previously defined properties. Undefined properties will trigger an error.
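
For example (the property names and paths are illustrative):

<property name="fre_archive" value="/archive/$USER/fre"/>
<property name="fre_input" value="$(fre_archive)/input"/>

Here $(fre_archive) is defined first, so $(fre_input) may refer to it; reversing the two definitions would trigger an error.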

Internal properties may be defined only as direct children of the <experimentSuite> element or <platform> element.

When you run a FRE tool, you have to use the --platform option to define a platform (unless you use the default platform). This allows FRE to use both global and platform-specific properties.

Platform-dependent properties can reference global properties if the global properties are declared before the <setup> tag. Similarly, global properties can refer to platform-dependent properties if the global properties are defined after the <setup> tag.

An example:

  <platform name="nasa.default">
    <property name="sgi_flags" value="-Duse_SGI_GSM"/>
  </platform>
  <platform name="...">
    <property name="sgi_flags" value=""/>
  </platform>
<experiment ...>
  <component ...>
    <compile>
      <cppDefs>
        -Duse_netCDF -Duse_libMPI -DSPMD -DNUM_PROG_TRACERS_=3
        -DNUM_DIAG_TRACERS_=0 -DLAND_BND_TRACERS -Duse_shared_pointers
        $(sgi_flags)
      </cppDefs>

The placeholder $(sgi_flags) will be expanded into the string “-Duse_SGI_GSM” on the “nasa.default” platform, and into an empty string on the second platform. This allows compilation to succeed on both platforms without any changes to the XML file.

For the extra curious, properties are implemented inside the FRE library source files.

FRE C-Shell Inserts

The frerun tool inserts the content of all <csh> XML elements into various places in the runscript template, depending on their parent XML element and the value of their “type” attribute.

The “type” attribute affects only <csh> inserts that are direct children of the <input> XML element. In that case the <csh> content can be inserted into two different places in the runscript (before or inside the main loop). The content can also be automatically surrounded by if-endif brackets with a condition, which allows it to be executed only on, or only after, the first run of the runscript.

These rules are summarized below:

  • <platform> (any type) – the <csh> content is inserted at the beginning of the runscript, replacing the pragma “#FRE setup-platform-csh”
  • <runtime> (any type) – the content is inserted before the main loop, replacing the pragma “#FRE experiment-runtime-csh”
  • <input>, type not one of “init”, “always” or “postInit” – the content is inserted before the main loop, replacing the pragma “#FRE experiment-input-csh-init”
  • <input>, type=“init” – the content is inserted before the main loop, replacing the pragma “#FRE experiment-input-csh-init”, and is prefixed by the condition “($irun == 1 && $ireload == 1)”
  • <input>, type=“always” – the content is inserted inside the main loop, replacing the pragma “#FRE experiment-input-csh-always-or-postinit”
  • <input>, type=“postInit” – the content is inserted inside the main loop, replacing the pragma “#FRE experiment-input-csh-always-or-postinit”, and is prefixed by the condition “($irun != 1 || $ireload != 1)”
  • <postProcess> (any type) – the content is inserted inside the main loop, replacing the pragma “#FRE experiment-postprocess-csh”


All the experiment-level <csh> inserts are inherited from parent experiments: if you don’t define a given <csh> insert for the current experiment, its content will be taken from the parent experiment (and so on recursively). If you don’t want the current experiment to inherit anything from its parent, define a <csh> insert whose content is at least one space or linebreak.
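
For example, a child experiment can suppress an inherited input-time <csh> insert by defining one whose content is just a single space (the experiment names are illustrative):

<experiment name="child" inherit="parent">
  <input>
    <csh> </csh>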

IMPORTANT NOTE: The behavior described above is the original design. However, it has recently been found that the inheritance of typeless <csh> elements can be broken by the presence of <csh type=”init”> elements, and vice versa. Likewise, the inheritance of <csh type=”always”> elements can be broken by the presence of <csh type=”postInit”> elements, and vice versa. The reason is that these four types are processed pair by pair in frerun, which treats the first two types as a whole and the last two types as a whole. These quirks will be cleaned up in a future release.


FRE Targets

There are two places in the FRE toolkit where compilation and linkage directives can be defined: system-wide configuration files (known as mkmf templates) and your XML experiment suite configuration file (<cppDefs> XML elements). The system-wide configuration files store system-wide directives, which can be amended by user-defined directives.

The FRE toolkit contains one mkmf template per compiler. All these templates have been redesigned by V. Balaji, so the make utility can now accept a number of overrides, which effectively force make to select and use sets of directives from the corresponding mkmf template. User-defined directives also come in sets, which are embedded in <compile> elements. These elements can have a target attribute, which allows the FRE toolkit to address a particular set of directives.

So, all the compilation and linkage directives are grouped into sets, and a user needs a mechanism to select and use these sets. This mechanism is implemented via the command-line option --target, which the fremake tool uses to build executables and libraries. The option has been accepted by FRE tools for a long time; the difference is that it can now take comma- and/or dash-separated lists of values. If you don’t supply this option when calling any FRE tool, the FRE toolkit assumes --target=prod.

Not all possible make overrides are controlled by this target list. For example, the override VERBOSE=on is dynamically added to all make calls in your compile script during its execution, provided you execute it as a batch job. The set of possible make overrides might be extended in the future, and new ways to control them might be added as well.

All this allows you to simplify the XML file a little: you can remove all the <cppDefs> elements from your components, provided you are satisfied with the predefined compilation directives (look at the corresponding mkmf templates to get a feel for them).

CREDITS: The targets mechanism has been designed by V.Balaji, Amy Langenhorst and Aleksey Yakovlev.

The --target Option Value

The target option value can be a list of targets separated by commas and/or dashes (which means that commas and dashes aren’t allowed inside target names, and spaces aren’t allowed in the target list either). Targets may be predefined or user-defined. Predefined targets are either starters or followers. Starter targets are mutually exclusive: they can’t be used together in a single target list. Follower targets can be used with any starter target, and multiple follower targets are allowed. No more than one user-defined target is allowed in the target list.
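
For example, the following two invocations are equivalent ways to request a reproducible OpenMP build (the experiment name is illustrative):

 fremake --target=repro,openmp myExperiment
 fremake --target=repro-openmp myExperiment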

Predefined targets are:

Target name   Target type   Make override   What it does
prod          starter       (none)          regular production directives
debug         starter       DEBUG=on        adds debugging directives
repro         starter       REPRO=on        adds reproducibility directives
hdf5          follower      NETCDF=4        adds directives to generate HDF5-based netCDF files
openmp        follower      OPENMP=on       adds directives to compile and link with OpenMP

You may list the targets in the target option in any order: they will be checked and reordered by the FRE tools. The standard order is the starter target, then any number of follower targets, then a user-defined target. This order is used by FRE tools when they need to create directories that depend on the $(target) placeholder.

Compilation and/or Linkage Directives in Your XML File

If you need special compilation and/or linkage options that aren’t available via the predefined targets, you can either amend the predefined targets in your XML file or create your own sets of directives. The syntax is exactly the same in both cases (it hasn’t changed): enclose your directives in <cppDefs> elements, then place those elements inside <compile> elements with target attributes:

<component name="XYZ" ... >
  <compile target="TargetName">
    <cppDefs>
      ... compiler and/or linkage directives ...
    </cppDefs>

If the “TargetName” above equals one of the predefined target names, then the directives will be added to the corresponding set selected by that name in the system-wide mkmf template. If the “TargetName” is your own (not predefined) name, then you can add this name to the list in the --target option value, and all your directives will be added to the directives selected by the predefined targets. You can have any number of <compile> nodes with your own target names inside a component, but only one of them may appear on the command line as part of the target list. A <compile> element without a target attribute (or with an empty-valued target attribute) adds its directives regardless of the target list; such an element should contain directives that must always be added to the directives defined earlier.

Remember that the “target” attribute in <compile> elements can’t contain lists of target names; such lists are allowed on the command line only.
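
As a sketch, a user-defined target might look like this (the target name and directive are illustrative):

<component name="XYZ" ... >
  <compile target="bigTracers">
    <cppDefs> -DNUM_PROG_TRACERS_=10 </cppDefs>

and would be selected alongside a starter target via:

 fremake --target=prod,bigTracers myExperiment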

Direct Control of Make Overrides in Your XML File

Sometimes it’s too limiting to apply the same make overrides to the whole model. For example, you might need to compile all model components with the make override "OPENMP=on" while one component must be compiled without it. You can selectively control these overrides on a component-by-component basis. For example, to make sure that the “GOLD” component is compiled without the "OPENMP=on" override, add the <makeOverrides> element below:

<component name="GOLD" ... >
    <makeOverrides> OPENMP="" </makeOverrides>

The value of the <makeOverrides> element above is added to the make call for this component after all other overrides, so make receives an empty value for the OPENMP variable, which is interpreted as the absence of that override.
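
Conceptually, the make call for this component then ends with both assignments, something like (the target file name is illustrative):

 make OPENMP=on OPENMP="" libGOLD.a

Since the later command-line assignment wins, make sees an empty OPENMP for this component only.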


Targets are implemented inside the FRE library, starting with the version of 12/22/2009. All FRE tools based on this library are automatically targets-enabled.


Specifying data files at multiple sites

A useful way to specify data files is to define one or more root locations as properties in the <platform> tag:

<platform name="ncrc.default">
  <property name="FMS_ARCHIVE_ROOT" value="/lustre/fs/archive/fms"/>

Then the <dataFile> element can make use of these root locations:

<dataFile label="input" target="INPUT/" chksum="" size="" timestamp=""> 
  <dataSource platform="$(platform)">$(FMS_ARCHIVE_ROOT)/module_data/quebec/</dataSource> 

Because the “platform=$(platform)” condition is always true, this is equivalent to:

<dataFile label="input" target="INPUT/" chksum="" size="" timestamp="">$(FMS_ARCHIVE_ROOT)/module_data/quebec/</dataFile> 

This is also equivalent to the following, where “gfdl” and “ncrc” are sites known to FRE. This syntax allows you to customize input files by site if necessary.

<dataFile label="input" target="INPUT/" chksum="" size="" timestamp=""> 
  <dataSource site="gfdl">$(FMS_ARCHIVE_ROOT)/module_data/quebec/</dataSource> 
  <dataSource site="ncrc">$(FMS_ARCHIVE_ROOT)/module_data/quebec/</dataSource>

Note that a subset of /archive/gold is currently synced to /lustre/fs/archive/fms/gold. A subset of /archive/cm3 directory is synced to /lustre/fs/archive/fms/cm3. If there is data missing from /lustre/fs/archive/fms, let Amy know and she will sync it.

Input Data Staging

Input data staging at Gaea is needed to transfer files from LTFS to FS prior to a run. The batch nodes do not have LTFS mounted, so jobs needing access to files on LTFS will not work properly. To handle this, FRE provides two equivalent modes of input data staging. Once the data staging script has run successfully, runscripts can be submitted many times without another staging run, as long as the relevant paths on the FS filesystem still exist.

frerun option --submit-chained

The “submit-chained” mode works by having frerun create the normal runscript. When submitted, the runscript first initializes an input data staging job; that job loads input data into the “ptmp” directory and then submits the regular runscript as a second job, which performs the actual run. This means submit-chained creates only one job at a time. You can use “frerun --submit-chained”, or, to submit the runscript directly after running frerun without a submit option, do:

 msub -N <name> -l size=1,walltime=<time>,partition=es -q eslogin -v FRE_STAGE=CHAIN <script> 

A useful csh/tcsh alias for submitting a chained job for this purpose is provided in your environment by FRE. Simply use:

 msub_chain <script> 

frerun option --submit-staged

The “submit-staged” method works by having frerun create the normal runscript and submit it twice, so two jobs are submitted at once. The first submission carries options that force the script to work in “staging mode”, which performs the copy from LTFS to FS. The second job is the actual runscript job; it waits in the blocked queue until the first job completes. Your job cannot run until all the necessary files have been transferred to the FS filesystem; when they are, job 1 completes and the run job, job 2, starts. You can use “frerun --submit-staged”, or, to submit the runscript directly after running frerun without a submit option, do:

 msub -N <name> -l size=1,walltime=<time>,partition=es -q eslogin -v FRE_STAGE=INPUT <script> 

The second runscript is submitted with options that force it to wait for the first job to finish. This means the <jobId> reported by the first msub submission must be used in the second command:

 msub -l depend=afterok:<jobId> <script> 

<name> can be any name for the input staging job. Specifying it is not mandatory: a name is set in the runscript, but this option allows users to name the staging job differently so that its output file also has a different name.

<time> is the walltime for the staging job

<script> is the regular runscript
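
Putting it together, a staged submission might look like this (the job name, walltime, script name, and reported <jobId> of 123456 are all illustrative):

 msub -N stage_myrun -l size=1,walltime=02:00:00,partition=es -q eslogin -v FRE_STAGE=INPUT myrunscript
 msub -l depend=afterok:123456 myrunscript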

A useful csh/tcsh alias for submitting a staged job for this purpose is provided in your environment by FRE. Simply use:

 msub_stage <script> 

Interactive input data staging

Input data staging can also be run interactively from a login node:
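
A sketch, assuming staging is triggered by the same FRE_STAGE environment variable that the batch examples above pass via “msub -v”:

 setenv FRE_STAGE INPUT
 ./<script>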


where <script> is the regular runscript generated by frerun (without the --submit-chained or --submit-staged options).

Naming Conventions

Source Code and Compilation

The program fremake creates a CVS checkout script in the $(src) directory. See the “Predefined Placeholders” and “Directories” sections of this document for more information on how the directory locations are defined.

The compilation script is then created in the $(exec) directory. The executable created by the script will be in that directory as well.


Runscripts

The program frerun will create one or more runscripts. There are two methods of determining the name for a runscript, depending on whether you are running regression tests (frerun invoked with the -r regression_name argument) or a production run (frerun invoked without the -r regression_name argument).

For production runs, frerun will create the runscript as $(scripts)/run/$name, deriving the runtime information from the <production> element in your XML file. An example element is shown here.

   <production simTime="8" units="years" npes="45">
      <segment simTime="1" units="months" runTime="00:44:00"/>

The production runscript will run the full simulation time of 8 years in 1 month segments as denoted above, restarting itself as needed every 8 hours of run time.

For regression tests, frerun will derive the runtime information from the <regression> element(s) in your XML file. Example elements are shown here.

      <regression name="basic">
         <run days="8" npes="15" runTimePerJob="00:30:00"/>
      <regression name="restarts">
         <run days="4 4" npes="15" runTimePerJob="00:20:00"/>
         <run days="2 2 2 2" npes="15" runTimePerJob="00:20:00"/>
      <regression name="scaling">
         <run days="8" npes="1" atmos_layout="1,0" ice_layout="1,0" runTimePerJob="04:00:00"/>
         <run days="8" npes="3" runTimePerJob="02:00:00"/>
         <run days="8" npes="45" runTimePerJob="00:20:00"/>
         <run days="8" npes="60" runTimePerJob="00:20:00"/>

To run a regression test with the information in the regression element labeled “basic” above, use frerun -r basic $name. A runscript will then be created at $(scripts)/run/${name}_${runparams}, where $runparams is a string determined by the length of the run, the number of times the executable is called within the script, and the number of processors used. In the “basic” example above, $runparams would be 1x0m8d_15pe, which translates to “one times zero months, eight days, on 15 processors”.

A single frerun command may create more than one runscript for a given experiment. The runscripts will have different $runparams strings. With the example XML above, frerun -r restarts $name would create two runscripts, and frerun -r scaling $name would create four runscripts. The program frerun also recognizes the keyword suite, which would create runscripts from each of the three regression elements basic, restarts and scaling.
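
For example, with the XML above (the experiment name is illustrative):

 frerun -r suite myExperiment

would create all seven runscripts: one from “basic”, two from “restarts” and four from “scaling”.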

Runscript Output

Output directories are placed in the $archive/$name directory. If you specify the following:

<directory type="archive">/archive/$user/fre/$name</directory>

Then production output would be as follows for experiment am2p10:

|-- ascii   - directory containing text output files
|-- history - directory containing output as specified by the diagnostics table
|-- restart - directory containing datasets which can be used as initial conditions for future runs
`-- pp      - directory containing post-processed history output

Regression test output utilizes another output directory level named for the runtime information as described above. The example regression elements shown in the previous section would produce the following output structure:

|-- 1x0m8d_15pe
|   |-- ascii
|   |-- history
|   `-- restart
|-- 1x0m8d_1pe
|   |-- ascii
|   |-- history
|   `-- restart
|-- 1x0m8d_3pe
|   |-- ascii
|   |-- history
|   `-- restart
|-- 1x0m8d_45pe
|   |-- ascii
|   |-- history
|   `-- restart
|-- 1x0m8d_60pe
|   |-- ascii
|   |-- history
|   `-- restart
|-- 1x1m0d_45pe
|   |-- ascii
|   |-- history
|   `-- restart
|-- 2x0m4d_15pe
|   |-- ascii
|   |-- history
|   `-- restart
`-- 4x0m2d_15pe
    |-- ascii
    |-- history
    `-- restart 

User contributed scripts/modifications

Any additional features, scripts or modifications produced by FRE users will be added here, organized by topic. The contributing user is noted in parentheses.


There is a modification of freconvert that prints all the dataSource files in the xml that is being converted to fre4. (Niki Zadeh)

cvs up -r listDataSourceFiles_nnz freconvert

Warnings and caveats

  • Please do not rely on $FREROOT in your XML files. It is deprecated because system modules now control the root location of the FRE installation. Use $(rootDir) or another appropriate directory path.
  • Do cd $work/INPUT at the beginning of the csh sections in the xml file if the commands need to operate in the INPUT directory.
  • Note that the utility FRE uses to copy data (hsmget) will make a directory for the contents of an archive file, stripping off the extensions tar, cpio and nc. This utility may therefore have undesirable results if files called “foo.tar”, “foo.cpio” and “foo.nc” are in the same directory. If your data follows this pattern, please rename your files to avoid the conflict. Please note that the order of resolving archives from high to low priority is: tar, nc.tar, cpio, nc.cpio.
  • Some csh sections may need to be modified if they attempt to modify input data files in INPUT. This happens because these files may be a unix hard link to what should be a fixed copy of the input data. If your csh fits this scenario, please modify it to operate on a copy of the input data file.
  • qa -n 30 will give a wider column for the experiment name.
  • Check your csh sections for full paths to tools like “combine-ncc”. In order to get the latest version provided by the FRE environment modules, do not use a full path for the following tools: combine-ncc, decompress-ncc, fregrid, fregrid_parallel, list_ncvars.csh, mppnccombine, nccmp, ncexists, split_ncvars.csh, stacksize.csh, timavg.csh, varlist.csh, container.csh, list_months.csh, and split_into_months.csh.

If fremake -t debug gives errors

Errors look like:

ld: fms.x: short data segment overflowed (0x5a10f8 >= 0x400000)
ld: can't relax section: No such file or directory
make: *** [fms_c48L48_am3p9_riga_201006.x] Error 1

Compiling without the debug flag works, but compiling in debug mode (fremake -t debug) produces errors like those shown above.

This is a memory overflow problem. A workaround is to reduce the complexity of the compiler options used to compile the largest object files:

1. Go to the directory containing your debug executable.
2. Locate the modules that are loaded in the compile script.
3. Load these modules.
4. Do an “ls -lSr” in the directory and find the largest .o file; typically this will be mpp_domains.o.
5. rm that file.
6. Run the compile script, and when it starts to compile that file, type Ctrl-C.
7. Cut and paste the compile line that the script gives you, remove “-check -check noarg_temp_created -check nopointer” from it, and hit enter. (A sketch of this step follows the list.)
8. You may want to do this with a few (up to 5) of the largest .o files.
9. Run the compile script again.
10. If it fails, return to step 5 and repeat as needed.
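
As an illustration of step 7, the compile line printed by the script might look something like this (the compiler and the other flags are illustrative):

 ifort -c -g -O0 -check -check noarg_temp_created -check nopointer mpp_domains.F90

which you would then re-run by hand without the three -check options:

 ifort -c -g -O0 mpp_domains.F90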