

The Stationarity Assumption in Statistical Downscaling


More details are available in a journal paper (January 2016):

Dixon KW, Lanzante JR, Nath MJ, Hayhoe K, Stoner A, Radhakrishnan A, Balaji V, Gaitán CF (2016) Evaluating the stationarity assumption in statistically downscaled climate projections: is past performance an indicator of future results? Climatic Change. doi:10.1007/s10584-016-1598-0 (an open access publication)


[Schematic of Typical Statistical Downscaling Data Flow]
In real-world applications, empirical statistical downscaling (ESD) takes three types of data files as input (solid boxes in the top figure) and produces downscaled future projections as output (dashed box at right). The downscaled projections are commonly thought of as value-added products: a refinement of global climate model (GCM) output designed to add regional detail and address GCM shortcomings via a process that gleans information from a combination of observations and the climate change response simulated by a GCM.

In the top illustration, a transform function is calculated during a training step in which a statistical technique compares observations to GCM output representing the same time period. Applying the transform function to GCM output from either the training period (blue arrow) or a future projection (red arrow) yields downscaled versions of the GCM output.
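As a concrete illustration of the training and application steps, here is a minimal sketch using empirical quantile mapping, one common class of ESD transform. The variable names, synthetic data, and the choice of quantile mapping itself are illustrative assumptions, not the specific statistical technique depicted in the figure.

```python
import numpy as np

def train_quantile_map(gcm_hist, obs_hist, n_quantiles=100):
    """Training step: learn a transform mapping the GCM's historical
    distribution onto the observed distribution (empirical quantile mapping)."""
    probs = np.linspace(0.0, 1.0, n_quantiles)
    gcm_q = np.quantile(gcm_hist, probs)   # quantiles of the GCM series
    obs_q = np.quantile(obs_hist, probs)   # matching quantiles of observations

    def transform(gcm_values):
        # Map each GCM value to the observed value at the same quantile.
        return np.interp(gcm_values, gcm_q, obs_q)

    return transform

# Hypothetical daily temperature series (degrees C) for one location.
rng = np.random.default_rng(0)
obs_hist = rng.normal(15.0, 5.0, size=3650)      # observations, training period
gcm_hist = rng.normal(13.0, 6.0, size=3650)      # GCM output, training period
gcm_future = rng.normal(16.0, 6.5, size=3650)    # GCM output, future projection

transform = train_quantile_map(gcm_hist, obs_hist)
downscaled_hist = transform(gcm_hist)      # blue arrow: training-period result
downscaled_future = transform(gcm_future)  # red arrow: future projection
```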

Although cross validation can be used to assess an ESD technique’s skill during the historical period, observations of the future do not exist, so there is no straightforward way to determine how much an ESD method’s skill might diminish when it is applied to future scenarios.
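For example, a simple block cross validation over the historical period might look like the sketch below; the contiguous-block structure, the RMSE skill metric, and the reuse of the hypothetical train_quantile_map helper from the previous sketch are all illustrative assumptions.

```python
import numpy as np

def cross_validated_rmse(gcm_hist, obs_hist, train_fn, k=5):
    """Split the historical record into k contiguous blocks; train the
    transform on k-1 blocks and score it on the block that was withheld."""
    indices = np.arange(len(obs_hist))
    errors = []
    for held_out in np.array_split(indices, k):
        train_idx = np.setdiff1d(indices, held_out)
        transform = train_fn(gcm_hist[train_idx], obs_hist[train_idx])
        pred = transform(gcm_hist[held_out])
        errors.append(np.sqrt(np.mean((pred - obs_hist[held_out]) ** 2)))
    return float(np.mean(errors))

# e.g. historical_skill = cross_validated_rmse(gcm_hist, obs_hist, train_quantile_map)
```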

Regardless of the details of a specific ESD technique, there is an underlying assumption that transform functions computed during a training step are fully applicable to GCM simulations of the future, even though the climate itself is changing — this is what we refer to as the “stationarity assumption”. Our “perfect model” experimental design seeks to isolate and quantify key aspects of this stationarity assumption. As illustrated in the lower figure, our perfect model approach does not make use of observational data. Rather, we substitute high resolution model results for observations and we substitute coarsened (smoothed by interpolation) versions of the high resolution model output for what would be the GCM results in a more typical, real-world ESD application.

[Big Brother Experimental Design for Statistical Downscaling]
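To make the substitution concrete, the coarsening step could look roughly like the following sketch. The grid sizes and the block-averaging shortcut are assumptions for illustration; the actual experiments smooth the high resolution fields by interpolating them to a coarser grid.

```python
import numpy as np

def coarsen(field_hi, factor=8):
    """Degrade a high-resolution 2-D field (e.g. ~25 km grid) so it can stand
    in for coarse GCM output (e.g. ~200 km grid). Block averaging is used here
    as a simple proxy for the interpolation-based smoothing described above."""
    ny, nx = field_hi.shape
    ny_c, nx_c = ny // factor, nx // factor
    trimmed = field_hi[:ny_c * factor, :nx_c * factor]
    return trimmed.reshape(ny_c, factor, nx_c, factor).mean(axis=(1, 3))

# Hypothetical high-resolution temperature field used as the "observation"
# stand-in; its coarsened counterpart plays the role of the GCM output.
rng = np.random.default_rng(1)
temp_hires = 15.0 + rng.normal(0.0, 2.0, size=(400, 600))
temp_coarse = coarsen(temp_hires)   # shape (50, 75)
```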

The datasets we use all derive from a set of high resolution GCM experiments — some were run to simulate the climate of recent decades and others simulate conditions at the end of the 21st century under high greenhouse gas emissions scenarios. Companion data sets were made by interpolating the high resolution (~25km) GCM output to a much coarser grid (~200km). During the ESD training step, statistical methods quantify relationships between the high resolution and coarse resolution data for the historical period. Then, using the coarsened data sets as input, we assess how well the transform functions deduced from the historical period can recover the high resolution GCM output, both for the historical period and for the late 21st century projections. Any degradation in skill computed for the future scenarios provides information as to how well the stationarity assumption holds.
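Putting the pieces together, a hedged end-to-end sketch of the perfect-model evaluation follows. The simple linear transform, the synthetic single-point series, and the RMSE-based skill score are illustrative stand-ins for the full two-dimensional fields and statistical methods used in the actual experiments.

```python
import numpy as np

def train_linear_transform(coarse, hires):
    """Training step: fit hires ~ a * coarse + b over the historical period.
    A stand-in for the more sophisticated ESD methods being evaluated."""
    a, b = np.polyfit(coarse, hires, deg=1)
    return lambda x: a * x + b

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Synthetic stand-ins: high-resolution "truth" for the historical period and a
# warmer late-21st-century projection, plus degraded (coarsened) counterparts.
rng = np.random.default_rng(2)
hires_hist = rng.normal(15.0, 5.0, size=3650)
hires_future = rng.normal(19.0, 6.0, size=3650)
coarse_hist = hires_hist + rng.normal(0.0, 2.0, size=3650)
coarse_future = hires_future + rng.normal(0.0, 2.0, size=3650)

# Train only on the historical period, then apply the transform to both periods.
transform = train_linear_transform(coarse_hist, hires_hist)
skill_hist = rmse(transform(coarse_hist), hires_hist)
skill_future = rmse(transform(coarse_future), hires_future)

# Any increase in error for the future case measures how much skill is lost
# when the historically trained transform is carried into a changed climate,
# i.e. how well the stationarity assumption holds in this idealized setup.
degradation = skill_future - skill_hist
```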


  •  You may view a 30-minute video of Keith Dixon discussing this experimental design at the 2013 NCPP Quantitative Evaluation of Downscaling workshop.