Stakeholders need high-resolution urban climate information for city planning and adaptation to climate risks. Climate models are too coarse in spatial resolution to represent cities at the relevant scale, and downscaled products often fail to account for urban effects. We propose a methodological framework for producing high-resolution urban databases used to drive the SURFEX-TEB land surface and urban canopy models. A historical simulation of the city of Philadelphia (Pennsylvania, USA) is carried out over the period 1991–2020, driven by a reanalysis. The simulation is compared with observations outside and inside the city, as well as with a field campaign. The results show good agreement between the model and observations, with average summer biases of only −1 °C and +0.8 °C for daily minimum and maximum temperatures outside the city, and almost none inside. The simulation is used to calculate the maximum daily heat index (HIX) and to study emergency heat alerts. The HIX is slightly overestimated and, consequently, the model simulates too many heat events if not bias-corrected. Overall, HIX conditions at Philadelphia International Airport are found to be a suitable proxy for city-wide summer conditions, and are therefore appropriate to use for emergency heat declarations.
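The abstract does not state how HIX is computed. A common choice for a daily maximum heat index is to apply the NWS Rothfusz regression to paired temperature and relative humidity values and take the daily maximum; the sketch below uses the standard Rothfusz coefficients but omits the simplified low-range formula and the NWS adjustment terms, so it is an illustration rather than the paper's exact method.

```python
def heat_index_f(t_f, rh):
    """Rothfusz regression: heat index (deg F) from air temperature (deg F)
    and relative humidity (%). Valid roughly for t_f >= 80 and rh >= 40."""
    return (-42.379 + 2.04901523 * t_f + 10.14333127 * rh
            - 0.22475541 * t_f * rh - 6.83783e-3 * t_f**2
            - 5.481717e-2 * rh**2 + 1.22874e-3 * t_f**2 * rh
            + 8.5282e-4 * t_f * rh**2 - 1.99e-6 * t_f**2 * rh**2)

def daily_hix(hourly_t_f, hourly_rh):
    """Daily maximum heat index from paired hourly series (hypothetical
    helper name; a real workflow would also handle the low-range cases)."""
    return max(heat_index_f(t, rh) for t, rh in zip(hourly_t_f, hourly_rh))
```

For example, 90 °F at 70% relative humidity yields a heat index of roughly 106 °F, matching the NWS heat index table.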
Wootten, Adrienne M., Keith W. Dixon, Dennis Adams-Smith, and Renee A. McPherson, March 2024: False springs and spring phenology: Propagating effects of downscaling technique and training data. International Journal of Climatology, 44(6), 2021-2040, DOI:10.1002/joc.8438. Abstract
Projected changes to spring phenological indicators (such as first leaf and first bloom) are of importance to assessing the impacts of climate change on ecosystems and species. The risk of false springs (when a killing freeze occurs after plants of interest bloom), which can cause ecological and economic damage, is also projected to change across much of the United States. Given the coarse nature of global climate models, downscaled climate projections have commonly been used to assess local changes in spring phenological indices. However, few studies examine the influence of uncertainty sources in the downscaling approach on projections of phenological changes. This study examines the influence of sources of uncertainty on projections of spring phenological indicators and false spring risk for the South Central United States. The downscaled climate projections were created using three statistical downscaling techniques applied with three gridded observation datasets as training data and three global climate models. This study finds that projections of spring phenological indicators and false spring risk are primarily sensitive to the choice of global climate models. However, this study also finds that the formulation of the downscaling approach can cause errors representing the daily low-temperature distribution, which can cause errors in false spring risk by failing to capture the timing between the last spring freeze and the first bloom. One should carefully consider the downscaling approach used when using downscaled climate projections to assess changes to spring phenology and false spring risk.
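The false spring definition in the abstract (a killing freeze after first bloom) can be stated as a small check. This sketch assumes a killing-freeze threshold of −2.2 °C (28 °F), a common choice in the false spring literature; the first-bloom day would come from a phenology index that is not implemented here, and the function name is illustrative.

```python
def is_false_spring(daily_tmin_c, first_bloom_doy, freeze_thresh_c=-2.2):
    """True if a killing freeze (tmin <= threshold, in deg C) occurs on or
    after the first-bloom day of year. `daily_tmin_c` is indexed by day of
    year starting at 1."""
    post_bloom = daily_tmin_c[first_bloom_doy - 1:]
    return any(t <= freeze_thresh_c for t in post_bloom)
```

The risk measure discussed in the abstract would then be the frequency of years for which this check is true.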
Statistical downscaling (SD) methods used to refine future climate change projections produced by physical models have been applied to a variety of variables. We evaluate four empirical distributional-type SD methods as applied to daily precipitation, which, because of its binary nature (wet vs. dry days) and tendency for a long right tail, presents a special challenge. Using data over the Continental U.S., we employ a ‘Perfect Model’ approach in which data from a large‐scale dynamical model is used as a proxy for both observations and model output. This experimental design allows for an assessment of expected performance of SD methods in a future high‐emissions climate‐change scenario. We find performance is tied much more to configuration options than to choice of SD method. In particular, proper handling of dry days (i.e., those with zero precipitation) is crucial to success. Although SD skill in reproducing day‐to‐day variability is modest (~15–25%), about half that found for temperature in our earlier work, skill is much greater with regard to reproducing the statistical distribution of precipitation (~50–60%). This disparity is the result of the stochastic nature of precipitation as pointed out by other authors. Distributional skill in the tails is lower overall (~30–35%), although in some regions and seasons it is small to non‐existent. Even when SD skill in the tails is reasonably good, in some instances, particularly in the southeastern United States during summer, absolute daily errors at some gridpoints can be large (~20 mm or more), highlighting the challenges in projecting future extremes.
Wootten, Adrienne M., Keith W. Dixon, Dennis Adams-Smith, and Renee A. McPherson, February 2021: Statistically Downscaled Precipitation Sensitivity to Gridded Observation Data and Downscaling Technique. International Journal of Climatology, 41(2), 980-1001, DOI:10.1002/joc.6716. Abstract
Future climate projections illuminate our understanding of the climate system and generate data products often used in climate impact assessments. Statistical downscaling (SD) is commonly used to address biases in global climate models (GCM) and to translate large‐scale projected changes to the higher spatial resolutions desired for regional and local scale studies. However, downscaled climate projections are sensitive to method configuration and input data source choices made during the downscaling process that can affect a projection's ultimate suitability for particular impact assessments. Quantifying how changes in inputs or parameters affect SD‐generated projections of precipitation is critical for improving these datasets and their use by impacts researchers. Through analysis of a systematically designed set of 18 statistically downscaled future daily precipitation projections for the south‐central United States, this study aims to improve the guidance available to impacts researchers. Two statistical processing techniques are examined: a ratio delta downscaling technique and an equi‐ratio quantile mapping method. The projections are generated using as input results from three GCMs forced with representative concentration pathway (RCP) 8.5 and three gridded observation‐based data products. Sensitivity analyses identify differences in the values of precipitation variables among the projections and the underlying reasons for the differences. Results indicate that differences in how observational station data are converted to gridded daily observational products can markedly affect statistically downscaled future projections of wet‐day frequency, intensity of precipitation extremes, and the length of multi‐day wet and dry periods. The choice of downscaling technique also can affect the climate change signal for variables of interest, in some cases causing change signals to reverse sign. Hence, this study provides illustrations and explanations for some downscaled precipitation projection differences that users may encounter, as well as evidence of symptoms that can affect user decisions.
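The equi-ratio quantile mapping idea mentioned above can be sketched with empirical distributions: each future model value is scaled by the observed-to-modelled quantile ratio evaluated at that value's percentile in the future model distribution. This is an illustrative reconstruction, not the study's exact configuration, and the dry-day guard is a simplification of the careful zero handling a real precipitation workflow needs.

```python
import numpy as np

def equiratio_qm(obs_hist, mod_hist, mod_fut):
    """Equi-ratio quantile mapping sketch for daily precipitation."""
    # Percentile of each future value within the future model distribution.
    p = np.array([np.mean(mod_fut <= x) for x in mod_fut])
    p = np.clip(p, 0.01, 0.99)  # avoid degenerate tail quantiles
    q_obs = np.quantile(obs_hist, p)
    q_mod = np.quantile(mod_hist, p)
    # Guard against division by zero on dry-day (zero precipitation) quantiles.
    ratio = np.where(q_mod > 0, q_obs / q_mod, 1.0)
    return mod_fut * ratio
```

Because the correction is multiplicative, the change signal of a ratio method like this is preserved in relative terms, which is one reason it behaves differently from a delta technique in the sensitivity analyses described above.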
Statistical downscaling methods are extensively used to refine future climate change projections produced by physical models. Distributional methods, which are among the simplest to implement, are also among the most widely used, either by themselves or in conjunction with more complex approaches. Here, building on earlier work, we evaluate the performance of seven methods in this class that range widely in their degree of complexity. We employ daily maximum temperature over the Continental U.S. in a "Perfect Model" approach in which the output from a large‐scale dynamical model is used as a proxy for both observations and model output. Importantly, this experimental design allows one to estimate expected performance under a future high‐emissions climate‐change scenario.
We examine skill over the full distribution as well as in the tails, seasonal variations in skill, and the ability to reproduce the climate change signal. Viewed broadly, there generally are modest overall differences in performance across the majority of the methods. However, the choice of philosophical paradigms used to define the downscaling algorithms divides the seven methods into two classes of better vs. poorer overall performance. In particular, the bias‐correction plus change‐factor approach performs better overall than the bias‐correction only approach. Finally, we examine the performance of some special tail treatments that we introduced in earlier work, which were based on extensions of a widely used existing scheme. We find that our tail treatments provide a further enhancement in downscaling extremes.
The cumulative distribution function transform (CDFt) downscaling method has been used widely to provide local‐scale information and bias correction to output from physical climate models. The CDFt approach is one from the category of statistical downscaling methods that operates via transformations between statistical distributions. Although numerous studies have demonstrated that such methods provide value overall, much less effort has focused on their performance with regard to values in the tails of distributions. We evaluate the performance of CDFt‐generated tail values based on four distinct approaches, two native to CDFt and two of our own creation, in the context of a "Perfect Model" setting in which global climate model output is used as a proxy for both observational and model data. We find that the native CDFt approaches can have sub‐optimal performance in the tails, particularly with regard to the maximum value. However, our alternative approaches provide substantial improvement.
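The core CDFt transform can be sketched empirically: the unknown future local CDF is estimated as F_of(x) = F_oh(F_mh^-1(F_mf(x))), and future model values are then quantile-mapped onto it. This is an illustration of the idea, not the CDFt package implementation, and it makes no special provision for the tails, which is precisely where the abstract reports the native approaches can struggle.

```python
import numpy as np

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at points `x`."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

def cdft(obs_hist, mod_hist, mod_fut, n_grid=500):
    """CDF-transform downscaling sketch: estimate the future local CDF as
    F_of(x) = F_oh(F_mh^-1(F_mf(x))), then quantile-map future model
    values onto it."""
    grid = np.linspace(mod_fut.min(), mod_fut.max(), n_grid)
    p_mf = np.clip(ecdf(mod_fut, grid), 1e-6, 1 - 1e-6)
    f_of = ecdf(obs_hist, np.quantile(mod_hist, p_mf))
    # Downscale each future value: x -> F_of^-1(F_mf(x)), by inverting
    # the estimated CDF on the grid.
    p = np.clip(ecdf(mod_fut, mod_fut), 1e-6, 1 - 1e-6)
    return np.interp(p, f_of, grid)
```

Because the estimated CDF is only defined on the span of the future model values, the largest downscaled values are pinned to the grid edge; the alternative tail treatments the abstract describes are aimed at exactly this kind of behavior.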
Statistical downscaling is used widely to refine projections of future climate. Although generally successful, in some circumstances it can lead to highly erroneous results.
Statistical downscaling (SD) is commonly used to provide information for the assessment of climate change impacts. Using as input the output from large-scale dynamical climate models and observation-based data products, it aims to provide finer-grained detail and also to mitigate systematic biases. It is generally recognized as providing added value. However, one of the key assumptions of SD is that the relationships used to train the method during a historical time period are unchanged in the future, in the face of climate change. The validity of this assumption is typically quite difficult to assess in the normal course of analysis, as observations of future climate are lacking. We approach this problem using a “Perfect Model” experimental design in which high-resolution dynamical climate model output is used as a surrogate for both past and future observations.
We find that while SD in general adds considerable value, in certain well-defined circumstances it can produce highly erroneous results. Furthermore, the breakdown of SD in these contexts could not have been anticipated during the typical course of evaluation based only on available historical data. We diagnose and explain the reasons for these failures in terms of physical, statistical and methodological causes. These findings highlight the need for caution in the use of statistically downscaled products as well as the need for further research to consider other hitherto unknown pitfalls, perhaps utilizing more advanced “Perfect Model” designs than the one we have employed.
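The "Perfect Model" evaluation loop recurring in these abstracts can be sketched generically: coarsened high-resolution model output stands in for the GCM, and the high-resolution future fields serve as the verifying "observations" that are unavailable in the real world. The names and the toy downscaler below are illustrative only.

```python
import numpy as np

def perfect_model_eval(hires_hist, hires_fut, coarsen, downscale):
    """Score a downscaling method against a known synthetic future."""
    # Coarsened high-res output plays the role of GCM data.
    gcm_hist = coarsen(hires_hist)
    gcm_fut = coarsen(hires_fut)
    # Train on the historical pair, then apply to the future proxy GCM.
    pred_fut = downscale(hires_hist, gcm_hist, gcm_fut)
    # The future high-res fields are the verifying "observations".
    err = pred_fut - hires_fut
    return {"bias": float(err.mean()), "rmse": float(np.sqrt((err ** 2).mean()))}
```

A simple delta-change downscaler with a constant coarsening bias recovers the synthetic future exactly in this setup; the value of the design is that more realistic downscalers and biases expose the failure modes described above.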