
The Stationarity Assumption in Statistical Downscaling

CONTRASTING A TYPICAL APPLICATION OF STATISTICAL DOWNSCALING WITH OUR “PERFECT MODEL” EXPERIMENTAL DESIGN

A one-page PDF summary of the perfect model framework is available.

More details are available in the journal articles:

Lanzante, JR, KW Dixon, MJ Nath, CE Whitlock, and D Adams-Smith (2018): Some pitfalls in statistical downscaling of future climate. Bulletin of the American Meteorological Society, doi:10.1175/BAMS-D-17-0046.1.

Dixon, KW, JR Lanzante, MJ Nath, K Hayhoe, A Stoner, A Radhakrishnan, V Balaji, and CF Gaitán (2016): Evaluating the stationarity assumption in statistically downscaled climate projections: is past performance an indicator of future results? Climatic Change, doi:10.1007/s10584-016-1598-0.

The perfect model design was also used in the following papers:

Lanzante, JR, KW Dixon, D Adams-Smith, MJ Nath, and CE Whitlock (2021, in press): Evaluation of some distributional downscaling methods as applied to daily precipitation with an eye towards extremes. International Journal of Climatology, doi:10.1002/joc.7013.

Lanzante, JR, D Adams-Smith, KW Dixon, MJ Nath, and CE Whitlock (2020): Evaluation of some distributional downscaling methods as applied to daily maximum temperature with emphasis on extremes. International Journal of Climatology, doi:10.1002/joc.6288.

Lanzante, JR, MJ Nath, CE Whitlock, KW Dixon, and D Adams-Smith (2019): Evaluation and improvement of tail behaviour in the cumulative distribution function transform downscaling method. International Journal of Climatology, 39, 2449–2460, doi:10.1002/joc.5964.


NOTE: The following describes the perfect model experimental design used in the papers listed above. Since then, the GFDL ESD team has experimented with variations of the perfect model design that focus on different, complementary aspects of bias correction and statistical downscaling method performance.


[Schematic of Typical Statistical Downscaling Data Flow]
In real-world applications, the process of empirical statistical downscaling (ESD) uses three types of data files as input (solid boxes in the top figure) to produce downscaled future projections as output (right dashed box). The downscaled projections are commonly thought of as value-added products — a refinement of global climate model (GCM) output designed to add regional detail and address GCM shortcomings via a process that gleans information from a combination of observations and the climate change response simulated by a GCM.

In the top illustration, a transform function is calculated during a training step in which a statistical technique compares observations to GCM output representing the same time period. Applying the transform function to GCM output from either the training period (blue arrow) or a future projection (red arrow) yields downscaled versions of the GCM output.
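
As a concrete but hedged illustration of this training-and-apply pattern, the Python sketch below uses plain empirical quantile mapping on synthetic data. It is a stand-in for, not a reproduction of, the distributional methods (e.g., the CDF-transform family) evaluated in the papers above; all variable names and values here are hypothetical.

```python
import numpy as np

def train_quantile_map(obs, gcm_hist, n_quantiles=100):
    """Training step: build a transform that maps GCM quantiles onto observed quantiles."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    gcm_q = np.quantile(gcm_hist, q)  # GCM distribution over the training period
    obs_q = np.quantile(obs, q)       # observed distribution over the same period

    def transform(x):
        # Find each value's place in the GCM distribution, return the observed counterpart
        return np.interp(x, gcm_q, obs_q)

    return transform

# Synthetic stand-ins for observations and GCM output
rng = np.random.default_rng(0)
obs = rng.normal(15.0, 5.0, 10_000)         # "observations", training period
gcm_hist = rng.normal(13.0, 6.0, 10_000)    # GCM output, same training period

f = train_quantile_map(obs, gcm_hist)
downscaled_hist = f(gcm_hist)               # blue arrow: transform applied to the training period
gcm_future = rng.normal(16.0, 6.5, 10_000)  # GCM output, future projection
downscaled_future = f(gcm_future)           # red arrow: same transform applied to the future
```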

Though one can use cross-validation to assess an ESD technique’s skill during the historical period, there is no straightforward way, absent observations of the future, to determine how much the method’s skill might diminish when applied to future scenarios.

Regardless of the details of a specific ESD technique, there is an underlying assumption that transform functions computed during a training step remain fully applicable to GCM simulations of the future, even though the climate itself is changing — this is what we refer to as the “stationarity assumption”. Our “perfect model” experimental design seeks to isolate and quantify key aspects of this stationarity assumption. As illustrated in the lower figure, our perfect model approach does not make use of observational data. Rather, we substitute high resolution model results for observations, and coarsened (smoothed by interpolation) versions of that high resolution output for what would be the GCM results in a typical, real-world ESD application.
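
To make the substitution concrete, here is a minimal sketch of the coarsening step, using block averaging of a synthetic 2-D field. Block averaging is only a simple stand-in for the interpolation-based smoothing used in the actual experiments; the grid sizes and the coarsen function are illustrative.

```python
import numpy as np

def coarsen(field, factor=8):
    """Block-average a 2-D high resolution field onto a grid `factor` times coarser
    (a stand-in for the interpolation used in the actual experiments)."""
    ny, nx = field.shape
    ny_c, nx_c = ny // factor, nx // factor
    trimmed = field[:ny_c * factor, :nx_c * factor]  # drop edge cells that don't tile evenly
    return trimmed.reshape(ny_c, factor, nx_c, factor).mean(axis=(1, 3))

hi_res = np.random.default_rng(1).normal(size=(256, 256))  # synthetic high resolution field
coarse = coarsen(hi_res)  # plays the role of "GCM output" in the perfect model setup
print(coarse.shape)       # (32, 32), roughly the ~25 km -> ~200 km coarsening ratio
```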

[Big Brother Experimental Design for Statistical Downscaling]

The datasets we use all derive from a set of high resolution GCM experiments — some were run to simulate the climate of recent decades, while others simulate conditions at the end of the 21st century under high greenhouse gas emissions scenarios. Companion datasets were made by interpolating the high resolution (~25 km) GCM output to a much coarser grid (~200 km). During the ESD training step, statistical methods quantify relationships between the high resolution and coarse resolution data for the historical period. Then, using the coarsened datasets as input, we assess how well the transform functions deduced from the historical period can recover the high resolution GCM output, both for the historical period and for the late 21st century projections. Any degradation in skill computed for the future scenarios provides information as to how well the stationarity assumption holds.
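
A minimal sketch of this evaluation loop follows, again on synthetic data, with the toy quantile-mapping transform from the earlier sketch redefined here so the example is self-contained. The error score is a simple illustrative choice; the published studies use more carefully constructed skill measures.

```python
import numpy as np

def train_quantile_map(obs, gcm, n_quantiles=100):
    """Empirical quantile mapping, as in the earlier sketch."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    gcm_q, obs_q = np.quantile(gcm, q), np.quantile(obs, q)
    return lambda x: np.interp(x, gcm_q, obs_q)

def distributional_rmse(downscaled, target):
    """Toy skill score: RMSE between the sorted (quantile-matched) samples."""
    return float(np.sqrt(np.mean((np.sort(downscaled) - np.sort(target)) ** 2)))

rng = np.random.default_rng(2)
hi_hist = rng.normal(15.0, 5.0, 10_000)               # high resolution "truth", historical
coarse_hist = hi_hist + rng.normal(1.5, 2.0, 10_000)  # coarsened stand-in, historical
hi_fut = rng.normal(19.0, 5.5, 10_000)                # high resolution "truth", late 21st century
coarse_fut = hi_fut + rng.normal(2.0, 2.5, 10_000)    # coarsened stand-in, future

# Training step: the high resolution output plays the role of "observations"
f = train_quantile_map(hi_hist, coarse_hist)

hist_err = distributional_rmse(f(coarse_hist), hi_hist)  # skill in the training climate
fut_err = distributional_rmse(f(coarse_fut), hi_fut)     # skill in the future climate
print(f"historical error: {hist_err:.2f}, future error: {fut_err:.2f}")
# Growth of the future error relative to the historical one indicates how much
# skill is lost when the stationarity assumption is stressed in this toy setting.
```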

 


  •  You may view a 30-minute video of Keith Dixon discussing this experimental design at the 2013 NCPP Quantitative Evaluation of Downscaling workshop.