GFDL - Geophysical Fluid Dynamics Laboratory

STATISTICAL DOWNSCALING
Introduction

Around 2005 I began doing background work in preparation for my entry into the realm of statistical downscaling, an effort that came to fruition during the 2011-2012 time frame. Although I had envisioned this primarily as a project of my own, a convergence of factors has (happily) swept me up into a much larger effort! We now have a Statistical Downscaling Project at GFDL, with my colleague Keith Dixon as the de facto leader. Other group members are Mary Jo Nath, Carolyn Whitlock, and Dennis Adams-Smith, who joined us in June 2016. Here you can meet the ESD Team!

Goals & Philosophy

Practitioners of statistical downscaling are driven to a large extent by the need for answers to practical questions regarding the potential impacts of climate change. As a result, downscaling is often carried out in an ad hoc fashion fueled by expediency. Typically, practitioners have neither the time nor the expertise to evaluate downscaling methods or to consider the consequences of the various choices made in applying them.

This dilemma was articulated nicely in a recent publication. With this as motivation, we hope to help fill a void in the community regarding scrutiny of statistical downscaling techniques and practices. Our overarching goals include:

  1. Subject downscaling techniques to rigorous quantitative evaluation
  2. Test sensitivities to choices made in application of downscaling techniques
  3. Improve downscaling techniques
  4. Provide guidance to end users in the application of techniques and interpretation of results

Statistical downscaling operates by deriving relationships between climate model output and observations, with the intent of producing more localized information that is free of model biases. Often, downscaling relationships derived from the recent past are applied to model projections of future climate. However, the method's skill degrades if the relationships of the past do not hold in the future. This assumption of “statistical stationarity” underpins all statistical downscaling techniques, yet in real-world practice it is next to impossible to validate, because we do not have “observations from the future”!
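To make the basic idea concrete, here is a minimal sketch of empirical quantile mapping, one common class of statistical downscaling/bias correction. It is purely illustrative and not the specific method used at GFDL: all data, function names, and parameters below are invented for the example.

```python
import numpy as np

def fit_quantile_map(model_hist, obs, n_quantiles=100):
    """Derive a transfer function by pairing model quantiles with
    observed quantiles over a common historical period."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.quantile(model_hist, q), np.quantile(obs, q)

def apply_quantile_map(model_future, model_q, obs_q):
    """Map future model values onto the observed distribution by
    interpolating within the trained quantile pairs; values outside
    the training range are clamped to the end quantiles (a common
    simple choice)."""
    return np.interp(model_future, model_q, obs_q)

# Toy example: a model that runs 2 degrees too warm locally.
rng = np.random.default_rng(0)
obs = rng.normal(15.0, 3.0, 5000)    # "observed" local temperatures
model_hist = obs + 2.0               # biased model, historical period
model_future = model_hist + 1.0      # same bias plus a warming signal

mq, oq = fit_quantile_map(model_hist, obs)
downscaled = apply_quantile_map(model_future, mq, oq)
# The +2 bias is removed while the +1 warming signal is retained --
# but only because the model-observation relationship is the same in
# both periods (the stationarity assumption discussed above).
```

If the bias in the future period differed from the historical bias, the mapping trained on the past would silently transfer the wrong correction, which is exactly the failure mode that motivates testing stationarity.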

One of the initial thrusts of our work focuses on assessing this assumption. To circumvent the lack of observations from decades into the future, we operate in a “perfect model” world. This scheme serves as an ideal test-bed, in which GCM output from historical and future climates is used as a proxy for the real world. In addition, targeted use of “synthetic data” can aid in diagnosing downscaling methods. You can view video presentations from the NCPP Workshop in August 2013 on the uses of Perfect Model (by Keith Dixon) and Synthetic Data (by John Lanzante).
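The logic of the perfect-model test can be sketched in a few lines. In the model world, a high-resolution "truth" exists for both the historical and future periods, so the error incurred by assuming stationarity can be measured directly. This is a toy illustration with fabricated data and a deliberately non-stationary relationship, not the GFDL implementation:

```python
import numpy as np

def train(coarse, truth):
    """Fit a simple linear transfer function, truth ~ a*coarse + b,
    on the historical period (stand-in for a downscaling method)."""
    a, b = np.polyfit(coarse, truth, 1)
    return a, b

rng = np.random.default_rng(1)

# Historical period: high-res "truth" relates linearly to coarse output.
coarse_hist = rng.normal(10.0, 2.0, 2000)
truth_hist = 1.2 * coarse_hist - 1.0 + rng.normal(0.0, 0.3, 2000)

# Future period: suppose the true relationship changes (non-stationarity).
coarse_fut = rng.normal(12.0, 2.0, 2000)
truth_fut = 1.4 * coarse_fut - 1.0 + rng.normal(0.0, 0.3, 2000)

a, b = train(coarse_hist, truth_hist)

# In-sample error is small...
rmse_hist = np.sqrt(np.mean((a * coarse_hist + b - truth_hist) ** 2))

# ...but because truth_fut exists in the perfect-model world, the much
# larger out-of-period error from assuming stationarity is measurable.
pred_fut = a * coarse_fut + b
rmse = np.sqrt(np.mean((pred_fut - truth_fut) ** 2))
```

In the real world only the historical comparison is possible; the perfect-model framework is what makes the second error number computable at all.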

Preliminary results from this initial effort have already uncovered some unexpected quirks in one class of statistical downscaling techniques, involving both coastal and seasonal effects. We are currently performing additional diagnostics and will attempt to modify the downscaling method to mitigate these quirks. Our “Perfect Model” framework and some very early results are reported in a recent paper.

Plans

Statistical downscaling will be the main focus of my research for the foreseeable future. While assessing “statistical stationarity” in downscaling methods is our first priority, I have a short wish list of other focal points I would like to pursue. One is improving the handling of the tails of distributions in downscaling, which would feed into the assessment of climate extremes. We would also like to compare a number of different downscaling techniques within our framework. Beyond that, I have a much longer wish list, and am open to new ideas for collaboration, which should keep me occupied for years to come!


Return to John Lanzante’s Home Page