Stakeholders need high-resolution urban climate information for city planning and adaptation to climate risks. Climate models have too coarse a spatial resolution to represent cities at the relevant scale, and downscaled products often fail to account for urban effects. We propose a methodological framework for producing high-resolution urban databases used to drive the SURFEX-TEB land surface and urban canopy models. A reanalysis-driven historical simulation is carried out over the period 1991–2020 for the city of Philadelphia (Pennsylvania, USA). The simulation is compared with observations outside and inside the city, as well as with a field campaign. The results show good agreement between the model and observations, with average summer biases of only −1 °C and +0.8 °C for daily minimum and maximum temperatures outside the city, and almost none inside. The simulation is used to calculate the maximum daily heat index (HIX) and to study emergency heat alerts. The HIX is slightly overestimated and, consequently, the model simulates too many heat events if not bias corrected. Overall, HIX conditions at Philadelphia International Airport are found to be suitable proxies for city-wide summer conditions, and are therefore appropriate for use in emergency heat declarations.
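For reference, HIX-style metrics are typically built on the US National Weather Service heat index. Below is a minimal sketch of the Rothfusz regression that the NWS heat index is commonly based on; it is an illustration only, not necessarily the exact operational formula used in the study:

```python
def heat_index_f(t_f, rh):
    """NWS Rothfusz regression: heat index (deg F) from air temperature
    t_f (deg F) and relative humidity rh (%). Valid roughly for HI >= 80 F;
    operational versions add low/high-humidity adjustments omitted here."""
    return (-42.379 + 2.04901523 * t_f + 10.14333127 * rh
            - 0.22475541 * t_f * rh - 6.83783e-3 * t_f**2
            - 5.481717e-2 * rh**2 + 1.22874e-3 * t_f**2 * rh
            + 8.5282e-4 * t_f * rh**2 - 1.99e-6 * t_f**2 * rh**2)

def daily_max_heat_index(hourly_t_f, hourly_rh):
    """Daily maximum heat index (an HIX-style metric) from hourly series."""
    return max(heat_index_f(t, rh) for t, rh in zip(hourly_t_f, hourly_rh))
```

A small overestimate in simulated temperature or humidity propagates nonlinearly through this regression, which is consistent with the abstract's note that uncorrected HIX yields too many heat events.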
Wootten, Adrienne M., Keith W Dixon, Dennis Adams-Smith, and Renee A McPherson, March 2024: False springs and spring phenology: Propagating effects of downscaling technique and training data. International Journal of Climatology, 44(6), 2021-2040, DOI:10.1002/joc.8438. Abstract
Projected changes to spring phenological indicators (such as first leaf and first bloom) are of importance to assessing the impacts of climate change on ecosystems and species. The risk of false springs (when a killing freeze occurs after plants of interest bloom), which can cause ecological and economic damage, is also projected to change across much of the United States. Given the coarse nature of global climate models, downscaled climate projections have commonly been used to assess local changes in spring phenological indices. However, few studies examine the influence of sources of uncertainty in the downscaling approach on projections of phenological changes. This study examines the influence of sources of uncertainty on projections of spring phenological indicators and false spring risk, using the South Central United States as a case study. The downscaled climate projections were created using three statistical downscaling techniques applied with three gridded observation datasets as training data and three global climate models. This study finds that projections of spring phenological indicators and false spring risk are primarily sensitive to the choice of global climate models. However, this study also finds that the formulation of the downscaling approach can cause errors in representing the daily low-temperature distribution, which can cause errors in false spring risk by failing to capture the timing between the last spring freeze and the first bloom. One should carefully consider the downscaling approach used when using downscaled climate projections to assess changes to spring phenology and false spring risk.
Tropical cyclone rapid intensification events often cause destructive hurricane landfalls because they are associated with the strongest storms and forecasts with the highest errors. Multi-decade observational datasets of tropical cyclone behavior have recently enabled documentation of upward trends in tropical cyclone rapid intensification in several basins. However, a robust anthropogenic signal in global intensification trends and the physical drivers of intensification trends have yet to be identified. To address these knowledge gaps, here we compare the observed trends in intensification and tropical cyclone environmental parameters to simulated natural variability in a high-resolution global climate model. In multiple basins and the global dataset, we detect a significant increase in intensification rates with a positive contribution from anthropogenic forcing. Furthermore, thermodynamic environments around tropical cyclones have become more favorable for intensification, and climate models show anthropogenic warming has significantly increased the probability of these changes.
Efforts to manage living marine resources (LMRs) under climate change need projections of future ocean conditions, yet most global climate models (GCMs) poorly represent critical coastal habitats. GCM utility for LMR applications will increase with higher spatial resolution but obstacles including computational and data storage costs, obstinate regional biases, and formulations prioritizing global robustness over regional skill will persist. Downscaling can help address GCM limitations, but significant improvements are needed to robustly support LMR science and management. We synthesize past ocean downscaling efforts to suggest a protocol to achieve this goal. The protocol emphasizes LMR-driven design to ensure delivery of decision-relevant information. It prioritizes ensembles of downscaled projections spanning the range of ocean futures with durations long enough to capture climate change signals. This demands judicious resolution refinement, with pragmatic consideration for LMR-essential ocean features superseding theoretical investigation. Statistical downscaling can complement dynamical approaches in building these ensembles. Inconsistent use of bias correction indicates a need for objective best practices. Application of the suggested protocol should yield regional ocean projections that, with effective dissemination and translation to decision-relevant analytics, can robustly support LMR science and management under climate change.
Statistical downscaling (SD) methods used to refine future climate change projections produced by physical models have been applied to a variety of variables. We evaluate four empirical distributional type SD methods as applied to daily precipitation, which because of its binary nature (wet vs. dry days) and tendency for a long right tail presents a special challenge. Using data over the Continental U.S. we use a ‘Perfect Model’ approach in which data from a large‐scale dynamical model is used as a proxy for both observations and model output. This experimental design allows for an assessment of expected performance of SD methods in a future high‐emissions climate‐change scenario. We find performance is tied much more to configuration options rather than choice of SD method. In particular, proper handling of dry days (i.e., those with zero precipitation) is crucial to success. Although SD skill in reproducing day‐to‐day variability is modest (~15–25%), about half that found for temperature in our earlier work, skill is much greater with regards to reproducing the statistical distribution of precipitation (~50–60%). This disparity is the result of the stochastic nature of precipitation as pointed out by other authors. Distributional skill in the tails is lower overall (~30–35%), although in some regions and seasons it is small to non‐existent. Even when SD skill in the tails is reasonably good, in some instances, particularly in the southeastern United States during summer, absolute daily errors at some gridpoints can be large (~20 mm or more), highlighting the challenges in projecting future extremes.
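The dry-day handling this abstract identifies as crucial can be made concrete. The sketch below is a minimal empirical quantile mapping for daily precipitation with an explicit wet-day threshold; the function name, threshold value, and formulation are illustrative assumptions, not the configurations evaluated in the study:

```python
import numpy as np

def quantile_map_precip(obs_hist, mod_hist, mod_fut, wet_thresh=0.1):
    """Empirical quantile mapping for daily precipitation with explicit
    dry-day handling: values below wet_thresh (e.g., mm/day) are set dry."""
    # Build the wet-day distributions only; mixing in zeros distorts the map
    obs_wet = np.sort(obs_hist[obs_hist >= wet_thresh])
    mod_wet = np.sort(mod_hist[mod_hist >= wet_thresh])
    out = np.zeros_like(mod_fut, dtype=float)
    wet = mod_fut >= wet_thresh
    # Non-exceedance probability of each future wet value in the model's
    # historical wet-day distribution
    p = np.clip(np.searchsorted(mod_wet, mod_fut[wet]) / len(mod_wet), 0.0, 1.0)
    # Map that probability onto the observed wet-day distribution
    out[wet] = np.quantile(obs_wet, p)
    return out
```

Deciding how sub-threshold days enter (or are excluded from) the fitted distributions is exactly the configuration choice the abstract finds matters more than the choice of SD method.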
Wootten, Adrienne M., Keith W Dixon, Dennis Adams-Smith, and Renee A McPherson, February 2021: Statistically Downscaled Precipitation Sensitivity to Gridded Observation Data and Downscaling Technique. International Journal of Climatology, 41(2), 980-1001, DOI:10.1002/joc.6716. Abstract
Future climate projections illuminate our understanding of the climate system and generate data products often used in climate impact assessments. Statistical downscaling (SD) is commonly used to address biases in global climate models (GCM) and to translate large‐scale projected changes to the higher spatial resolutions desired for regional and local scale studies. However, downscaled climate projections are sensitive to method configuration and input data source choices made during the downscaling process that can affect a projection's ultimate suitability for particular impact assessments. Quantifying how changes in inputs or parameters affect SD‐generated projections of precipitation is critical for improving these datasets and their use by impacts researchers. Through analysis of a systematically designed set of 18 statistically downscaled future daily precipitation projections for the south‐central United States, this study aims to improve the guidance available to impacts researchers. Two statistical processing techniques are examined: a ratio delta downscaling technique and an equi‐ratio quantile mapping method. The projections are generated using as input results from three GCMs forced with representative concentration pathway (RCP) 8.5 and three gridded observation‐based data products. Sensitivity analyses identify differences in the values of precipitation variables among the projections and the underlying reasons for the differences. Results indicate that differences in how observational station data are converted to gridded daily observational products can markedly affect statistically downscaled future projections of wet‐day frequency, intensity of precipitation extremes, and the length of multi‐day wet and dry periods. The choice of downscaling technique also can affect the climate change signal for variables of interest, in some cases causing change signals to reverse sign. 
Hence, this study provides illustrations and explanations for some downscaled precipitation projection differences that users may encounter, as well as evidence of symptoms that can affect user decisions.
Statistical downscaling methods are extensively used to refine future climate change projections produced by physical models. Distributional methods, which are among the simplest to implement, are also among the most widely used, either by themselves or in conjunction with more complex approaches. Here, building on earlier work, we evaluate the performance of seven methods in this class that range widely in their degree of complexity. We employ daily maximum temperature over the Continental U.S. in a "Perfect Model" approach in which the output from a large‐scale dynamical model is used as a proxy for both observations and model output. Importantly, this experimental design allows one to estimate expected performance under a future high‐emissions climate‐change scenario.
We examine skill over the full distribution as well as in the tails, seasonal variations in skill, and the ability to reproduce the climate change signal. Viewed broadly, there generally are modest overall differences in performance across the majority of the methods. However, the choice of philosophical paradigms used to define the downscaling algorithms divides the seven methods into two classes, of better vs. poorer overall performance. In particular, the bias‐correction plus change‐factor approach performs better overall than the bias‐correction only approach. Finally, we examine the performance of some special tail treatments that we introduced in earlier work which were based on extensions of a widely used existing scheme. We find that our tail treatments provide a further enhancement in downscaling extremes.
Most present forecast systems for estuaries predict conditions for only a few days into the future. However, there are many reasons to expect that skillful estuarine forecasts are possible for longer time periods, including increasingly skillful extended atmospheric forecasts, the potential for lasting impacts of atmospheric forcing on estuarine conditions, and the predictability of tidal cycles. In this study, we test whether skillful estuarine forecasts are possible for up to 35 days into the future by combining an estuarine model of Chesapeake Bay with 35‐day atmospheric forecasts from an operational weather model. When compared with both a hindcast simulation from the same estuarine model and with observations, the estuarine forecasts for surface water temperature are skillful up to about two weeks into the future, and the forecasts for bottom temperature, surface and bottom salinity, and density stratification are skillful for all or the majority of the forecast period. Bottom oxygen forecasts are skillful when compared to the model hindcast, but not when compared with observations. We also find that skill for all variables in the estuary can be improved by taking the mean of multiple estuarine forecasts driven by an ensemble of atmospheric forecasts. Finally, we examine the forecasts in detail using two case studies of extreme events, and we discuss opportunities for improving the forecast skill.
Tropical cyclones that rapidly intensify are typically associated with the highest forecast errors and cause a disproportionate amount of human and financial losses. Therefore, it is crucial to understand if, and why, there are observed upward trends in tropical cyclone intensification rates. Here, we utilize two observational datasets to calculate 24-hour wind speed changes over the period 1982–2009. We compare the observed trends to natural variability in bias-corrected, high-resolution, global coupled model experiments that accurately simulate the climatological distribution of tropical cyclone intensification. Both observed datasets show significant increases in tropical cyclone intensification rates in the Atlantic basin that are highly unusual compared to model-based estimates of internal climate variations. Our results suggest a detectable increase of Atlantic intensification rates with a positive contribution from anthropogenic forcing and reveal a need for more reliable data before detecting a robust trend at the global scale.
The cumulative distribution function transform (CDFt) downscaling method has been used widely to provide local‐scale information and bias correction to output from physical climate models. The CDFt approach is one from the category of statistical downscaling methods that operates via transformations between statistical distributions. Although numerous studies have demonstrated that such methods provide value overall, much less effort has focused on their performance with regard to values in the tails of distributions. We evaluate the performance of CDFt‐generated tail values based on four distinct approaches, two native to CDFt and two of our own creation, in the context of a "Perfect Model" setting in which global climate model output is used as a proxy for both observational and model data. We find that the native CDFt approaches can have sub‐optimal performance in the tails, particularly with regard to the maximum value. However, our alternative approaches provide substantial improvement.
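The CDFt transform itself is compact. The sketch below estimates the future local CDF as F_oF(x) = F_oH(F_mH^{-1}(F_mF(x))) with plain empirical CDFs and no special tail treatment, which is precisely the regime the paper interrogates; variable names and the grid size are illustrative assumptions:

```python
import numpy as np

def ecdf(sample, x):
    """Empirical non-exceedance probability of x within sample."""
    s = np.sort(sample)
    return np.clip(np.searchsorted(s, x, side="right") / len(s), 0.0, 1.0)

def cdft(obs_hist, mod_hist, mod_fut, n_grid=2000):
    """CDF-transform downscaling sketch: estimate the future local CDF as
    F_oF(x) = F_oH(F_mH^{-1}(F_mF(x))), then quantile-map mod_fut onto it."""
    lo = min(map(np.min, (obs_hist, mod_hist, mod_fut)))
    hi = max(map(np.max, (obs_hist, mod_hist, mod_fut)))
    grid = np.linspace(lo, hi, n_grid)
    f_raw = ecdf(obs_hist, np.quantile(mod_hist, ecdf(mod_fut, grid)))
    # keep a strictly increasing CDF so it can be inverted by interpolation
    f_u, idx = np.unique(f_raw, return_index=True)
    return np.interp(ecdf(mod_fut, mod_fut), f_u, grid[idx])
```

Because the empirical CDF is flat beyond the largest training value, extreme future quantiles are clamped to the edge of the grid here, which illustrates why the maximum value is where native approaches can be sub-optimal.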
Responses of tropical cyclones (TCs) to CO2 doubling are explored using coupled global climate models (GCMs) with increasingly refined atmospheric/land horizontal grids (~ 200 km, ~ 50 km and ~ 25 km). The three models exhibit similar changes in background climate fields thought to regulate TC activity, such as relative sea surface temperature (SST), potential intensity, and wind shear. However, global TC frequency decreases substantially in the 50 km model, while the 25 km model shows no significant change. The ~ 25 km model also has a substantial and spatially-ubiquitous increase of Category 3–4–5 hurricanes. Idealized perturbation experiments are performed to understand the TC response. Each model’s transient fully-coupled 2 × CO2 TC activity response is largely recovered by “time-slice” experiments using time-invariant SST perturbations added to each model’s own SST climatology. The TC response to SST forcing depends on each model’s background climatological SST biases: removing these biases leads to a global TC intensity increase in the ~ 50 km model, and a global TC frequency increase in the ~ 25 km model, in response to CO2-induced warming patterns and CO2 doubling. Isolated CO2 doubling leads to a significant TC frequency decrease, while isolated uniform SST warming leads to a significant global TC frequency increase; the ~ 25 km model has a greater tendency for frequency increase. Global TC frequency responds to both (1) changes in TC “seeds”, which increase due to warming (more so in the ~ 25 km model) and decrease due to higher CO2 concentrations, and (2) less efficient development of these “seeds” into TCs, largely due to the nonlinear relation between temperature and saturation specific humidity.
Statistical downscaling is used widely to refine projections of future climate. Although generally successful, in some circumstances it can lead to highly erroneous results.
Statistical downscaling (SD) is commonly used to provide information for the assessment of climate change impacts. Using as input the output from large-scale dynamical climate models and observation-based data products, it aims to provide finer grain detail and also to mitigate systematic biases. It is generally recognized as providing added value. However, one of the key assumptions of SD is that the relationships used to train the method during a historical time period are unchanged in the future, in the face of climate change. The validity of this assumption is typically quite difficult to assess in the normal course of analysis, as observations of future climate are lacking. We approach this problem using a “Perfect Model” experimental design in which high-resolution dynamical climate model output is used as a surrogate for both past and future observations.
We find that while SD in general adds considerable value, in certain well-defined circumstances it can produce highly erroneous results. Furthermore, the breakdown of SD in these contexts could not be foreshadowed during the typical course of evaluation based only on available historical data. We diagnose and explain the reasons for these failures in terms of physical, statistical and methodological causes. These findings highlight the need for caution in the use of statistically downscaled products as well as the need for further research to consider other hitherto unknown pitfalls, perhaps utilizing more advanced “Perfect Model” designs than the one we have employed.
Muhling, Barbara A., Carlos F Gaitán, Charles A Stock, Vincent S Saba, Desiree Tommasi, and Keith W Dixon, March 2018: Potential Salinity and Temperature Futures for the Chesapeake Bay Using a Statistical Downscaling Spatial Disaggregation Framework. Estuaries and Coasts, 41(2), DOI:10.1007/s12237-017-0280-8. Abstract
Estuaries are productive and ecologically important ecosystems, incorporating environmental drivers from watersheds, rivers, and the coastal ocean. Climate change has potential to modify the physical properties of estuaries, with impacts on resident organisms. However, projections from general circulation models (GCMs) are generally too coarse to resolve important estuarine processes. Here, we statistically downscaled near-surface air temperature and precipitation projections to the scale of the Chesapeake Bay watershed and estuary. These variables were linked to Susquehanna River streamflow using a water balance model and finally to spatially resolved Chesapeake Bay surface temperature and salinity using statistical model trees. The low computational cost of this approach allowed rapid assessment of projected changes from four GCMs spanning a range of potential futures under a high CO2 emission scenario, for four different downscaling methods. Choice of GCM contributed strongly to the spread in projections, but choice of downscaling method was also influential in the warmest models. Models projected a ~2–5.5 °C increase in surface water temperatures in the Chesapeake Bay by the end of the century. Projections of salinity were more uncertain and spatially complex. Models showing increases in winter-spring streamflow generated freshening in the Upper Bay and tributaries, while models with decreased streamflow produced salinity increases. Changes to the Chesapeake Bay environment have implications for fish and invertebrate habitats, as well as migration, spawning phenology, recruitment, and occurrence of pathogens. Our results underline a potentially expanded role of statistical downscaling to complement dynamical approaches in assessing climate change impacts in dynamically challenging estuaries.
Salas, E A L., V A Seamster, K G Boykin, N M Harings, and Keith W Dixon, March 2017: Modeling the impacts of climate change on Species of Concern (birds) in South Central U.S. based on bioclimatic variables. AIMS Environmental Science, 4(2), DOI:10.3934/environsci.2017.2.358. Abstract
We used 19 bioclimatic variables, five species distribution modeling (SDM) algorithms, four general circulation models, and two future time periods (2050 and 2070) to model nine bird species identified as Species of Concern (SOC): Northern/Masked Bobwhite Quail (Colinus virginianus), Scaled Quail (Callipepla squamata), Pinyon Jay (Gymnorhinus cyanocephalus), Juniper Titmouse (Baeolophus ridgwayi), Mexican Spotted Owl (Strix occidentalis lucida), Cassin’s Sparrow (Peucaea cassinii), Lesser Prairie-Chicken (Tympanuchus pallidicinctus), Montezuma Quail (Cyrtonyx montezumae), and White-tailed Ptarmigan (Lagopus leucurus). The Generalized Linear Model, Random Forest, Boosted Regression Tree, Maxent, Multivariate Adaptive Regression Splines, and an ensemble model were used to identify present day core bioclimatic-envelopes for the species. We then projected future distributions of suitable climatic conditions for the species using data derived from four climate models run according to two greenhouse gas Representative Concentration Pathways (RCPs 2.6 and 8.5). Our models predicted changes in suitable bioclimatic-envelopes for all species for the years 2050 and 2070. Among the nine species of birds, the quails were found to be highly susceptible to climate change and appeared to be of most future conservation concern. The White-tailed Ptarmigan would lose about 62% of its suitable climatic habitat by 2050 and 67% by 2070. Among the species distribution models (SDMs), the Boosted Regression Tree model consistently performed fairly well based on Area Under the Curve (AUC range: 0.89 to 0.97) values. The ensemble models showed improved True Skill Statistics (all TSS values > 0.85) and Kappa Statistics (all K values > 0.80) for all species relative to the individual SDMs.
Salas, E A L., V A Seamster, N M Harings, K G Boykin, G Alvarez, and Keith W Dixon, August 2017: Projected Future Bioclimate-Envelope Suitability for Reptile and Amphibian Species of Concern in South Central USA. Herpetological Conservation and Biology, 12(2), 522-547. Abstract
Future climate change has impacts on the distribution of species. Using species distribution models (SDM), we modeled the bioclimatic envelopes of four herpetofauna species in the South Central USA including two salamanders, the Sacramento Mountain Salamander (Aneides hardii) and the Jemez Mountains Salamander (Plethodon neomexicanus), one anuran, the Chiricahua Leopard Frog (Lithobates chiricahuensis), and one turtle, the Rio Grande Cooter (Pseudemys gorzugi). We used Generalized Linear Model, Random Forest, Boosted Regression Tree, Maxent, and Multivariate Adaptive Regression Splines, and binary ensembles to develop the present day distributions of the species based on climate-driven models alone. We projected future distributions of the species using data from four climate models run according to two greenhouse gas concentration pathways (RCP 2.6 and RCP 8.5). Our model results projected losses and gains in suitable bioclimatic envelopes for the years 2050 and 2070. The Boosted Regression Tree model consistently performed well among SDMs based on Area Under the Curve (AUC; range = 0.88 to 0.97) values and kappa statistics (K > 0.75).
Empirical statistical downscaling (ESD) methods seek to refine global climate model (GCM) outputs via processes that glean information from a combination of observations and GCM simulations. They aim to create value-added climate projections by reducing biases and adding finer spatial detail. Analysis techniques, such as cross-validation, allow assessments of how well ESD methods meet these goals during observational periods. However, the extent to which an ESD method’s skill might differ when applied to future climate projections cannot be assessed readily in the same manner. Here we present a “perfect model” experimental design that quantifies aspects of ESD method performance for both historical and late 21st century time periods. The experimental design tests a key stationarity assumption inherent to ESD methods – namely, that ESD performance when applied to future projections is similar to that during the observational training period. Case study results employing a single ESD method (an Asynchronous Regional Regression Model variant) and climate variable (daily maximum temperature) demonstrate that violations of the stationarity assumption can vary geographically, seasonally, and with the amount of projected climate change. For the ESD method tested, the greatest challenges in downscaling daily maximum temperature projections are revealed to occur along coasts, in summer, and under conditions of greater projected warming. We conclude with a discussion of the potential use and expansion of the perfect model experimental design, both to inform the development of improved ESD methods and to provide guidance on the use of ESD products in climate impacts analyses and decision-support applications.
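The logic of the perfect model design can be mimicked in a toy setting: take a "high-res truth" series, coarsen it to stand in for GCM output, train a simple downscaler on the historical pairs, then score it against the withheld future truth. Everything below (the series, the coarsening factor, the trivial linear downscaler) is illustrative, not the study's actual ARRM setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def coarsen(series, k):
    """Block-average a 1-D series in windows of k, then repeat, to mimic
    the loss of fine-scale detail in a coarse GCM grid."""
    n = len(series) // k * k
    return series[:n].reshape(-1, k).mean(axis=1).repeat(k)

# hypothetical high-resolution "truth" for historical and future periods;
# the future adds warming plus extra variability the downscaler never saw
truth_hist = rng.normal(25.0, 4.0, 7300)
truth_fut = truth_hist + 3.0 + rng.normal(0.0, 2.0, 7300)

coarse_hist = coarsen(truth_hist, 10)   # proxy for historical GCM output
coarse_fut = coarsen(truth_fut, 10)     # proxy for future GCM output

# "train" a trivial statistical downscaler on the historical pairs
slope, intercept = np.polyfit(coarse_hist, truth_hist, 1)

# the perfect-model payoff: future skill is measurable against future truth
rmse_hist = np.sqrt(np.mean((slope * coarse_hist + intercept - truth_hist) ** 2))
rmse_fut = np.sqrt(np.mean((slope * coarse_fut + intercept - truth_fut) ** 2))
```

Ordinary cross-validation on historical data can only ever report rmse_hist; the design's value is that rmse_fut exists to be checked, so violations of the stationarity assumption become measurable.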
This study demonstrates skillful seasonal prediction of 2m air temperature and precipitation over land in a new high-resolution climate model developed by the Geophysical Fluid Dynamics Laboratory, and explores the possible sources of the skill. We employ a statistical optimization approach to identify the most predictable components of seasonal mean temperature and precipitation over land, and demonstrate the predictive skill of these components. First, we show improved skill of the high-resolution model over the previous lower-resolution model in seasonal prediction of the NINO3.4 index and other aspects of interest. Then we measure the skill of temperature and precipitation in the high-resolution model for boreal winter and summer, and diagnose the sources of the skill. Lastly, we reconstruct predictions using a few of the most predictable components to yield more skillful predictions than the raw model predictions. Over three decades of hindcasts, we find that the two most predictable components of temperature are characterized by a component that is likely due to changes in external radiative forcing in boreal winter and summer, and an ENSO-related pattern in boreal winter. The most predictable components of precipitation in both seasons are very likely ENSO-related. These components of temperature and precipitation can be predicted with significant correlation skill at least 9 months in advance. The reconstructed predictions using only the first few predictable components from the model show considerably better skill relative to observations than raw model predictions. This study shows that the use of refined statistical analysis and a high-resolution dynamical model leads to significant skill in seasonal predictions of 2m air temperature and precipitation over land.
Decadal prediction experiments were conducted as part of CMIP5 using the GFDL-CM2.1 forecast system. The abrupt warming of the North Atlantic subpolar gyre (SPG) that was observed in the mid 1990s is considered as a case study to evaluate our forecast capabilities and better understand the reasons for the observed changes. Initializing the CM2.1 coupled system produces high skill in retrospectively predicting the mid-90s shift, which is not captured by the uninitialized forecasts. All the hindcasts initialized in the early 90s show a warming of the SPG, however, only the ensemble mean hindcasts initialized in 1995 and 1996 are able to reproduce the observed abrupt warming and the associated decrease and contraction of the SPG. Examination of the physical mechanisms responsible for the successful retrospective predictions indicates that initializing the ocean is key to predict the mid 90s warming. The successful initialized forecasts show an increased Atlantic Meridional Overturning Circulation and North Atlantic current transport, which drive an increased advection of warm saline subtropical waters northward, leading to a westward shift of the subpolar front and subsequently a warming and spin down of the SPG. Significant seasonal climate impacts are predicted as the SPG warms, including a reduced sea-ice concentration over the Arctic, an enhanced warming over central US during summer and fall, and a northward shift of the mean ITCZ. These climate anomalies are similar to those observed during a warm phase of the Atlantic Multidecadal Oscillation, which is encouraging for future predictions of North Atlantic climate.
In our original paper (Vecchi et al., 2013, hereafter V13) we stated “the skill in the initialized forecasts comes in large part from the persistence of the mid-1990s shift by the initialized forecasts, rather than from predicting its evolution”. Smith et al. (2013, hereafter S13) challenge that assertion, contending that DePreSys was able to make a successful retrospective forecast of that shift. We stand by our original assertion, and present additional analyses using output from DePreSys retrospective forecasts to support our assessment.
Tropical cyclones (TCs) are a hazard to life and property and a prominent element of the global climate system, therefore understanding and predicting TC location, intensity and frequency is of both societal and scientific significance. Methodologies exist to predict basin-wide, seasonally-aggregated TC activity months, seasons and even years in advance. We show that a newly developed high-resolution global climate model can produce skillful forecasts of seasonal TC activity on spatial scales finer than basin-wide, from months and seasons in advance of the TC season. The climate model used here is targeted at predicting regional climate and the statistics of weather extremes on seasonal to decadal timescales, and is comprised of high-resolution (50km×50km) atmosphere and land components, and more moderate resolution (~100km) sea ice and ocean components. The simulation of TC climatology and interannual variations in this climate model is substantially improved by correcting systematic ocean biases through “flux-adjustment.” We perform a suite of 12-month duration retrospective forecasts over the 1981-2012 period, after initializing the climate model to observationally-constrained conditions at the start of each forecast period – using both the standard and flux-adjusted versions of the model. The standard and flux-adjusted forecasts exhibit equivalent skill at predicting Northern Hemisphere TC season sea surface temperature, but the flux-adjusted model exhibits substantially improved basin-wide and regional TC activity forecasts, highlighting the role of systematic biases in limiting the quality of TC forecasts. These results suggest that dynamical forecasts of seasonally-aggregated regional TC activity months in advance are feasible.
Barsugli, Joseph J., and Keith W Dixon, et al., November 2013: The Practitioner’s Dilemma: How to Assess the Credibility of Downscaled Climate Projections. EOS, 94(46), DOI:10.1002/2013EO460005. Abstract
Suppose you are a city planner, regional water manager, or wildlife conservation specialist who is asked to include the potential impacts of climate variability and change in your risk management and planning efforts. What climate information would you use? The choice is often regional or local climate projections downscaled from global climate models (GCMs; also known as general circulation models) to include detail at spatial and temporal scales that align with those of the decision problem. A few years ago this information was hard to come by. Now there is Web-based access to a proliferation of high-resolution climate projections derived with differing downscaling methods.
Using simulations performed with 18 coupled atmosphere-ocean global climate models from the CMIP5 project, projections of Northern Hemisphere snowfall under the RCP4.5 scenario are analyzed for the period 2006-2100. These models perform well in simulating 20th century snowfall, although there is a positive bias in many regions. Annual snowfall is projected to decrease across much of the Northern Hemisphere during the 21st century, with increases projected at higher latitudes. On a seasonal basis, the transition zone between negative and positive snowfall trends corresponds approximately to the -10 °C isotherm of the late 20th century mean surface air temperature such that positive trends prevail in winter over large regions of Eurasia and North America. Redistributions of snowfall throughout the entire snow season are projected to occur – even in locations where there is little change in annual snowfall. Changes in the fraction of precipitation falling as snow contribute to decreases in snowfall across most Northern Hemisphere regions, while changes in total precipitation typically contribute to increases in snowfall. A signal-to-noise analysis reveals that the projected changes in snowfall, based on the RCP4.5 scenario, are likely to become apparent during the 21st century for most locations in the Northern Hemisphere. The snowfall signal emerges more slowly than the temperature signal, suggesting that changes in snowfall are not likely to be early indicators of regional climate change.
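The signal-to-noise analysis above asks when a forced change rises above the noise of internal variability. A minimal sketch of that kind of emergence calculation, using an illustrative linear trend and noise level (synthetic values, not the study's data):

```python
# Hedged sketch: when does an accumulating linear signal exceed a
# 2-sigma threshold of internal variability? Trend and noise values
# here are illustrative assumptions, not results from the paper.
import numpy as np

def emergence_year(trend_per_year, noise_std, threshold=2.0):
    """First year at which the accumulated linear signal exceeds
    `threshold` standard deviations of internal variability."""
    years = np.arange(1, 101)            # years 1..100
    signal = trend_per_year * years      # accumulated forced change
    s2n = signal / noise_std             # signal-to-noise ratio
    exceed = np.nonzero(s2n >= threshold)[0]
    return int(years[exceed[0]]) if exceed.size else None

# A fast-emerging "temperature-like" signal vs. a weaker "snowfall-like" one
print(emergence_year(0.03, 0.25))  # → 17
print(emergence_year(0.01, 0.25))  # → 50 (the weaker signal emerges later)
```

A weaker trend relative to the same noise emerges decades later, which is the sense in which the snowfall signal lags the temperature signal.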
The impact of climate warming on the upper layer of the Bering Sea is investigated by using a high-resolution coupled global climate model. The model is forced by increasing atmospheric CO2 at a rate of 1% per year until CO2 reaches double its initial value (after 70 years), after which it is held constant. In response to this forcing, the upper layer of the Bering Sea warms by about 2 °C in the southeastern shelf and by a little more than 1 °C in the western basin. The wintertime ventilation to the permanent thermocline weakens in the western Bering Sea. After CO2 doubling, the southeastern shelf of the Bering Sea becomes almost ice-free in March, and the stratification of the upper layer strengthens in May and June. Changes in physical conditions due to climate warming would affect the preconditioning of spring biological productivity in the southeastern shelf.
Retrospective predictions of multi-year North Atlantic hurricane frequency are explored by applying a hybrid statistical-dynamical forecast system to initialized and non-initialized multi-year forecasts of tropical Atlantic and tropical mean sea surface temperatures (SSTs) from two global climate model forecast systems. By accounting for impacts of initialization and radiative forcing, retrospective predictions of five-year mean and nine-year mean tropical Atlantic hurricane frequency show significant correlation relative to a null hypothesis of zero correlation. The retrospective correlations are increased in a two-model average forecast and by using a lagged-ensemble approach, with the two-model ensemble decadal forecasts of hurricane frequency over 1961-2011 yielding correlation coefficients that approach 0.9.
These encouraging retrospective multi-year hurricane predictions, however, should be interpreted with care: although initialized forecasts have higher nominal skill than uninitialized ones, the relatively short record and large autocorrelation of the time series limit our confidence in distinguishing between the skill due to external forcing and that added by initialization. The nominal increase in correlation in the initialized forecasts relative to the uninitialized experiments is due to improved representation of the multi-year tropical Atlantic SST anomalies. The skill in the initialized forecasts comes in large part from persistence of the mid-1990s shift, rather than from predicting its evolution. Predicting shifts like that observed in 1994-1995 remains a critical issue for the success of multi-year forecasts of Atlantic hurricane frequency. The retrospective forecasts highlight the possibility that changes in the observing system impact forecast performance.
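The multi-year skill measure described above amounts to correlating running multi-year means of forecasts and observations; averaging suppresses year-to-year noise and isolates low-frequency shifts like the mid-1990s one. A minimal sketch on synthetic counts (the series, shift size, and noise level are illustrative assumptions, not the study's hurricane data):

```python
# Hedged sketch: five-year-mean correlation skill on synthetic data that
# loosely mimics a mid-1990s upward shift in annual hurricane counts.
import numpy as np

rng = np.random.default_rng(0)

def running_mean(x, window):
    """Running mean over `window` consecutive years (valid portion only)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

years = np.arange(1961, 2012)
shift = np.where(years >= 1995, 3.0, 0.0)   # low-frequency regime shift

# "Observations" and a "forecast" that both capture the shift but have
# independent interannual noise the forecast cannot predict.
obs = 8 + shift + rng.normal(0, 2, years.size)
fcst = 8 + shift + rng.normal(0, 2, years.size)

r = np.corrcoef(running_mean(obs, 5), running_mean(fcst, 5))[0, 1]
print(round(r, 2))  # correlation of five-year means; high if the shift persists
```

Because the forecast shares only the persistent shift with the observations, the five-year-mean correlation is high even though individual years are unpredictable, which is exactly the caveat raised above about skill coming from persistence rather than predicted evolution.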
The decadal predictability of sea surface temperature (SST) and 2m air temperature (T2m) in Geophysical Fluid Dynamics Laboratory (GFDL)'s decadal hindcasts, which are part of the Fifth Coupled Model Intercomparison Project experiments, has been investigated using an average predictability time (APT) analysis. Comparison of retrospective forecasts initialized using GFDL's Ensemble Coupled Data Assimilation system with uninitialized historical forcing simulations using the same model allows identification of an internal multidecadal pattern (IMP) for SST and T2m. The IMP of SST is characterized by an inter-hemisphere dipole, with warm anomalies centered in the North Atlantic subpolar gyre region and North Pacific subpolar gyre region, and cold anomalies centered in the Antarctic Circumpolar Current region. The IMP of T2m is characterized by a general bi-polar seesaw, with warm anomalies centered in Greenland, and cold anomalies centered in Antarctica. The retrospective prediction skill of the initialized system, verified against independent observations, indicates that the IMP of SST may be predictable up to a 4 (10) year lead time at the 95% (90%) significance level, and the IMP of T2m may be predictable up to 2 (10) years at the 95% (90%) significance level. The initialization of multidecadal variations of northward oceanic heat transport in the North Atlantic significantly improves the predictive skill of the IMP. The dominant roles of oceanic internal dynamics in decadal prediction are further elucidated by fixed-forcing experiments, in which radiative forcing is returned to 1961 values. These results point towards the possibility of meaningful decadal climate outlooks using dynamical coupled models, if they are appropriately initialized from a sustained climate observing system.
Identifying the prime drivers of the twentieth-century multidecadal variability in the Atlantic Ocean is crucial for predicting how the Atlantic will evolve in the coming decades and the resulting broad impacts on weather and precipitation patterns around the globe. Recently, Booth et al. (2012) showed that the HadGEM2-ES climate model closely reproduces the observed multidecadal variations of area-averaged North Atlantic sea surface temperature in the 20th century. The multidecadal variations simulated in HadGEM2-ES are primarily driven by aerosol indirect effects that modify net surface shortwave radiation. On the basis of these results, Booth et al. (2012) concluded that aerosols are a prime driver of twentieth-century North Atlantic climate variability. However, here it is shown that there are major discrepancies between the HadGEM2-ES simulations and observations in the North Atlantic upper ocean heat content, in the spatial pattern of multidecadal SST changes within and outside the North Atlantic, and in the subpolar North Atlantic sea surface salinity. These discrepancies may be strongly influenced by, and indeed in large part caused by, aerosol effects. It is also shown that the aerosol effects simulated in HadGEM2-ES cannot account for the observed anti-correlation between detrended multidecadal surface and subsurface temperature variations in the tropical North Atlantic. These discrepancies cast considerable doubt on the claim that aerosol forcing drives the bulk of this multidecadal variability.
We present results for simulated climate and climate change from a newly developed high-resolution global climate model (GFDL CM2.5). The GFDL CM2.5 model has an atmospheric resolution of approximately 50 km in the horizontal, with 32 vertical levels. The horizontal resolution in the ocean ranges from 28 km in the tropics to 8 km at high latitudes, with 50 vertical levels. This resolution allows the explicit simulation of some mesoscale eddies in the ocean, particularly at lower latitudes.
We present analyses based on the output of a 280 year control simulation; we also present results based on a 140 year simulation in which atmospheric CO2 increases at 1% per year until doubling after 70 years.
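As a quick arithmetic check on the experimental design above: compounding a 1% per year CO2 increase for 70 years does indeed roughly double the concentration, which is why this classic idealized forcing scenario reaches doubling at year 70.

```python
# Sanity check of the "1% per year to doubling" design: 70 years of
# 1% compound growth multiplies the initial CO2 concentration by ~2.
ratio = 1.01 ** 70
print(round(ratio, 3))  # → 2.007
```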
Results are compared to the GFDL CM2.1 climate model, which has somewhat similar physics but coarser resolution. The simulated climate in CM2.5 shows marked improvement over many regions, especially the tropics, including a reduction in the double ITCZ and an improved simulation of ENSO. Regional precipitation features are much improved. The Indian monsoon and Amazonian rainfall are also substantially more realistic in CM2.5.
The response of CM2.5 to a doubling of atmospheric CO2 has many features in common with CM2.1, with some notable differences. For example, rainfall changes over the Mediterranean appear to be tightly linked to topography in CM2.5, in contrast to CM2.1 where the response is more spatially homogeneous. In addition, in CM2.5 the near-surface ocean warms substantially in the high latitudes of the Southern Ocean, in contrast to simulations using CM2.1.
Matei et al. (Reports, 6 January 2012, p. 76) claim to show skillful multiyear predictions of the Atlantic Meridional Overturning Circulation (AMOC). However, these claims are not justified, primarily because the predictions of AMOC transport do not outperform simple reference forecasts based on climatological annual cycles. Accordingly, there is no justification for the "confident" prediction of a stable AMOC through 2014.
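The reference-forecast argument above can be made concrete with a mean-squared-error skill score: a prediction adds value only if it beats the climatological annual cycle. A minimal sketch with synthetic series (the function and data are illustrative assumptions, not the AMOC records in question):

```python
# Hedged sketch: MSE skill score against a climatological annual-cycle
# reference, the kind of baseline comparison invoked above. Synthetic data.
import numpy as np

def msess(obs, fcst, clim):
    """MSE skill score: 1 - MSE(forecast) / MSE(climatology).
    Positive only if the forecast beats the climatological reference."""
    mse_f = np.mean((fcst - obs) ** 2)
    mse_c = np.mean((clim - obs) ** 2)
    return 1.0 - mse_f / mse_c

months = np.arange(48)
annual_cycle = np.sin(2 * np.pi * months / 12)       # climatological reference
drift = 0.3 * np.sin(2 * np.pi * months / 60)        # slow multiyear signal
obs = annual_cycle + drift                           # "observed" transport

# A forecast that only reproduces the annual cycle has zero skill over
# climatology; one that also captures the drift has positive skill.
print(round(msess(obs, annual_cycle, annual_cycle), 2))         # → 0.0
print(round(msess(obs, annual_cycle + drift, annual_cycle), 2)) # → 1.0
```

A forecast indistinguishable from the climatological cycle scores exactly zero, which is the sense in which predictions that "do not outperform simple reference forecasts" carry no demonstrated skill.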
The sensitivity of the North Atlantic Ocean Circulation to an abrupt change in the Nordic Sea overflow is investigated for the first time using a high-resolution eddy-permitting global coupled ocean-atmosphere model (GFDL CM2.5). The Nordic Sea overflow is perturbed through the change of the bathymetry in GFDL CM2.5. We analyze the Atlantic Meridional Overturning Circulation (AMOC) adjustment process and the downstream oceanic response to the perturbation. The results suggest that north of 34N, AMOC changes induced by changes in the Nordic Sea overflow propagate on the slow tracer advection time scale, instead of the fast Kelvin wave time scale, resulting in a time lead of several years between subpolar and subtropical AMOC changes. The results also show that a stronger and deeper-penetrating Nordic Sea overflow leads to a stronger and deeper AMOC, stronger northward ocean heat transport, reduced Labrador Sea deep convection, a stronger cyclonic Northern Recirculation Gyre (NRG), a westward shift of the North Atlantic Current (NAC) and a southward shift of the Gulf Stream, warmer sea surface temperature (SST) east of Newfoundland and colder SST south of the Grand Banks, a stronger and deeper NAC and Gulf Stream, and stronger oceanic eddy activity along the NAC and Gulf Stream paths. A stronger/weaker Nordic Sea overflow also leads to a contracted/expanded subpolar gyre (SPG). This sensitivity study points to the important role of the Nordic Sea overflow in the large-scale North Atlantic ocean circulation, and it is crucial for climate models to have a correct representation of the Nordic Sea overflow.
The North Atlantic is among the few places where decadal climate variations are considered potentially predictable. The physical mechanisms of the decadal variability are hypothesized to be associated with fluctuations of the Atlantic meridional overturning circulation (AMOC). Perfect model predictability experiments using the GFDL CM2.1 climate model are analyzed to investigate the potential predictability of the AMOC. Results indicate that the AMOC is predictable up to 20 years. We further connect AMOC predictability to readily observable fields. We show that modeled surface and subsurface signatures of AMOC variations defined by characteristic patterns of sea surface height, subsurface temperature, and upper ocean heat content anomalies, have a potential predictability similar to the AMOC's. Since we have longer observational records for these quantities than for direct measurements of the AMOC, our study highlights a potentially new promising method for monitoring AMOC variations, and hence assessing the predictability of the real climate system.
A new field of study, “decadal prediction,” is emerging in climate science. Decadal prediction lies between seasonal/interannual forecasting and longer-term climate change projections, and focuses on time-evolving regional climate conditions over the next 10–30 yr. Numerous assessments of climate information user needs have identified this time scale as being important to infrastructure planners, water resource managers, and many others. It is central to the information portfolio required to adapt effectively to and through climatic changes. At least three factors influence time-evolving regional climate at the decadal time scale: 1) climate change commitment (further warming as the coupled climate system comes into adjustment with increases of greenhouse gases that have already occurred), 2) external forcing, particularly from future increases of greenhouse gases and recovery of the ozone hole, and 3) internally generated variability. Some decadal prediction skill has been demonstrated to arise from the first two of these factors, and there is evidence that initialized coupled climate models can capture mechanisms of internally generated decadal climate variations, thus increasing predictive skill globally and particularly regionally. Several methods have been proposed for initializing global coupled climate models for decadal predictions, all of which involve global time-evolving three-dimensional ocean data, including temperature and salinity. An experimental framework to address decadal predictability/prediction is described in this paper and has been incorporated into the coordinated Coupled Model Intercomparison Project, phase 5 (CMIP5) experiments, some of which will be assessed for the IPCC Fifth Assessment Report (AR5). These experiments will likely guide work in this emerging field over the next 5 yr.
In many climate model simulations using realistic, time-varying climate change forcing agents for the 20th and 21st centuries, the North Atlantic thermohaline circulation (THC) weakens in the 21st century, with little change in the 20th century. Here we use a comprehensive climate model to explore the impact of various climate change forcing agents on the THC. We conduct ensembles of integrations with subsets of climate change forcing agents. Increasing greenhouse gases – in isolation – produce a significant THC weakening in the late 20th century, but this change is partially offset by increasing anthropogenic aerosols, which tend to strengthen the THC. The competition between increasing greenhouse gases and anthropogenic aerosols thus produces no significant THC change in our 20th century simulations when all climate forcings are included. The THC weakening becomes significant several decades into the 21st century, when the effects of increasing greenhouse gases overwhelm the aerosol effects.
The formulation and simulation characteristics of two new global coupled climate models developed at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) are described. The models were designed to simulate atmospheric and oceanic climate and variability from the diurnal time scale through multicentury climate change, given our computational constraints. In particular, an important goal was to use the same model for both experimental seasonal to interannual forecasting and the study of multicentury global climate change, and this goal has been achieved.
Two versions of the coupled model are described, called CM2.0 and CM2.1. The versions differ primarily in the dynamical core used in the atmospheric component, along with the cloud tuning and some details of the land and ocean components. For both coupled models, the resolution of the land and atmospheric components is 2° latitude × 2.5° longitude; the atmospheric model has 24 vertical levels. The ocean resolution is 1° in latitude and longitude, with meridional resolution equatorward of 30° becoming progressively finer, such that the meridional resolution is 1/3° at the equator. There are 50 vertical levels in the ocean, with 22 evenly spaced levels within the top 220 m. The ocean component has poles over North America and Eurasia to avoid polar filtering. Neither coupled model employs flux adjustments.
The control simulations have stable, realistic climates when integrated over multiple centuries. Both models have simulations of ENSO that are substantially improved relative to previous GFDL coupled models. The CM2.0 model has been further evaluated as an ENSO forecast model and has good skill (CM2.1 has not been evaluated as an ENSO forecast model). Generally reduced temperature and salinity biases exist in CM2.1 relative to CM2.0. These reductions are associated with 1) improved simulations of surface wind stress in CM2.1 and associated changes in oceanic gyre circulations; 2) changes in cloud tuning and the land model, both of which act to increase the net surface shortwave radiation in CM2.1, thereby reducing an overall cold bias present in CM2.0; and 3) a reduction of ocean lateral viscosity in the extratropics in CM2.1, which reduces sea ice biases in the North Atlantic.
Both models have been used to conduct a suite of climate change simulations for the 2007 Intergovernmental Panel on Climate Change (IPCC) assessment report and are able to simulate the main features of the observed warming of the twentieth century. The climate sensitivities of the CM2.0 and CM2.1 models are 2.9 and 3.4 K, respectively. These sensitivities are defined by coupling the atmospheric components of CM2.0 and CM2.1 to a slab ocean model and allowing the model to come into equilibrium with a doubling of atmospheric CO2. The output from a suite of integrations conducted with these models is freely available online (see http://nomads.gfdl.noaa.gov/).
Manuscript received 8 December 2004, in final form 18 March 2005
The current generation of coupled climate models run at the Geophysical Fluid Dynamics Laboratory (GFDL) as part of the Climate Change Science Program contains ocean components that differ in almost every respect from those contained in previous generations of GFDL climate models. This paper summarizes the new physical features of the models and examines the simulations that they produce. Of the two new coupled climate model versions 2.1 (CM2.1) and 2.0 (CM2.0), the CM2.1 model represents a major improvement over CM2.0 in most of the major oceanic features examined, with strikingly lower drifts in hydrographic fields such as temperature and salinity, more realistic ventilation of the deep ocean, and currents that are closer to their observed values. Regional analysis of the differences between the models highlights the importance of wind stress in determining the circulation, particularly in the Southern Ocean. At present, major errors in both models are associated with Northern Hemisphere Mode Waters and outflows from marginal seas, particularly the Mediterranean Sea and Red Sea.
Historical climate simulations of the period 1861–2000 using two new Geophysical Fluid Dynamics Laboratory (GFDL) global climate models (CM2.0 and CM2.1) are compared with observed surface temperatures. All-forcing runs include the effects of changes in well-mixed greenhouse gases, ozone, sulfates, black and organic carbon, volcanic aerosols, solar flux, and land cover. Indirect effects of tropospheric aerosols on clouds and precipitation processes are not included. Ensembles of size 3 (CM2.0) and 5 (CM2.1) with all forcings are analyzed, along with smaller ensembles of natural-only and anthropogenic-only forcing, and multicentury control runs with no external forcing.
Observed warming trends on the global scale and in many regions are simulated more realistically in the all-forcing and anthropogenic-only forcing runs than in experiments using natural-only forcing or no external forcing. In the all-forcing and anthropogenic-only forcing runs, the model shows some tendency for too much twentieth-century warming in lower latitudes and too little warming in higher latitudes. Differences in Arctic Oscillation behavior between models and observations contribute substantially to an underprediction of the observed warming over northern Asia. In the all-forcing and natural-only forcing runs, a temporary global cooling in the models during the 1880s not evident in the observed temperature records is volcanically forced. El Niño interactions complicate comparisons of observed and simulated temperature records for the El Chichón and Mt. Pinatubo eruptions during the early 1980s and early 1990s.
The simulations support previous findings that twentieth-century global warming has resulted from a combination of natural and anthropogenic forcing, with anthropogenic forcing being the dominant cause of the pronounced late-twentieth-century warming. The regional results provide evidence for an emergent anthropogenic warming signal over many, if not most, regions of the globe. The warming signal has emerged rather monotonically in the Indian Ocean/western Pacific warm pool during the past half-century. The tropical and subtropical North Atlantic and the tropical eastern Pacific are examples of regions where the anthropogenic warming signal now appears to be emerging from a background of more substantial multidecadal variability.
A coupled climate model with poleward-intensified westerly winds simulates significantly higher storage of heat and anthropogenic carbon dioxide by the Southern Ocean in the future when compared with the storage in a model with initially weaker, equatorward-biased westerlies. This difference results from the larger outcrop area of the dense waters around Antarctica and more vigorous divergence, which remains robust even as rising atmospheric greenhouse gas levels induce warming that reduces the density of surface waters in the Southern Ocean. These results imply that the impact of warming on the stratification of the global ocean may be reduced by the poleward intensification of the westerlies, allowing the ocean to remove additional heat and anthropogenic carbon dioxide from the atmosphere.
Russell, Joellen L., Ronald J Stouffer, and Keith W Dixon, September 2006: Intercomparison of the Southern Ocean Circulations in IPCC Coupled Model Control Simulations. Journal of Climate, 19(18), DOI:10.1175/JCLI3869.1. Abstract
The analyses presented here focus on the Southern Ocean as simulated in a set of global coupled climate model control experiments conducted by several international climate modeling groups. Dominated by the Antarctic Circumpolar Current (ACC), the vast Southern Ocean can influence large-scale surface climate features on various time scales. Its climatic relevance stems in part from it being the region where most of the transformation of the World Ocean’s water masses occurs. In climate change experiments that simulate greenhouse gas–induced warming, Southern Ocean air–sea heat fluxes and three-dimensional circulation patterns make it a region where much of the future oceanic heat storage takes place, though the magnitude of that heat storage is one of the larger sources of uncertainty associated with the transient climate response in such model projections. Factors such as the Southern Ocean’s wind forcing, heat, and salt budgets are linked to the structure and transport of the ACC in ways that have not been expressed clearly in the literature. These links are explored here in a coupled model context by analyzing a sizable suite of preindustrial control experiments associated with the forthcoming Intergovernmental Panel on Climate Change’s Fourth Assessment Report. A framework is developed that uses measures of coupled model simulation characteristics, primarily those related to the Southern Ocean wind forcing and water mass properties, to allow one to categorize, and to some extent predict, which models do better or worse at simulating the Southern Ocean and why. Hopefully, this framework will also lead to increased understanding of the ocean’s response to climate changes.
Russell, Joellen L., Keith W Dixon, Anand Gnanadesikan, J R Toggweiler, and Ronald J Stouffer, August 2006: The once and future battles between Thor and the Midgard Serpent: The Southern Hemisphere Westerlies and the Antarctic Circumpolar Current. Geochimica et Cosmochimica Acta, 70(18 Supp 1), DOI:10.1016/j.gca.2006.06.1010. PDF
The climate response of the new GFDL climate model (CM2) to idealized changes in the atmospheric CO2 concentration is documented. This new model is very different from earlier GFDL models in its parameterizations of subgrid-scale physical processes, numerical algorithms, and resolution. The model was constructed to be useful for both seasonal-to-interannual predictions and climate change research. Unlike previous versions of the global coupled GFDL climate models, CM2 does not use flux adjustments to maintain a stable control climate. Results from two model versions, Climate Model versions 2.0 (CM2.0) and 2.1 (CM2.1), are presented.
Two atmosphere–mixed layer ocean or slab models, Slab Model versions 2.0 (SM2.0) and 2.1 (SM2.1), are constructed corresponding to CM2.0 and CM2.1. Using the SM2 models to estimate the climate sensitivity, it is found that the equilibrium globally averaged surface air temperature increases 2.9 (SM2.0) and 3.4 K (SM2.1) for a doubling of the atmospheric CO2 concentration. When forced by a 1% per year CO2 increase, the surface air temperature difference around the time of CO2 doubling [transient climate response (TCR)] is about 1.6 K for both coupled model versions (CM2.0 and CM2.1). The simulated warming is near the median of the responses documented for the climate models used in the 2001 Intergovernmental Panel on Climate Change (IPCC) Working Group I Third Assessment Report (TAR).
The thermohaline circulation (THC) weakened in response to increasing atmospheric CO2. By the time of CO2 doubling, the weakening in CM2.1 is larger than that found in CM2.0: 7 and 4 Sv (1 Sv = 10⁶ m³ s⁻¹), respectively. However, the THC in the control integration of CM2.1 is stronger than in CM2.0, so that the percentage change in the THC between the two versions is more similar. The average THC change for the models presented in the TAR is about 3 or 4 Sv; however, the range across the model results is very large, varying from a slight increase (+2 Sv) to a large decrease (−10 Sv).
Stouffer, Ronald J., Keith W Dixon, Michael J Spelman, William J Hurlin, Jianjun Yin, Jonathan M Gregory, A J Weaver, M Eby, G M Flato, D Y Robitaille, H Hasumi, A Oka, Aixue Hu, J H Jungclaus, I V Kamenkovich, A Levermann, M Montoya, S Murakami, S Nawrath, W R Peltier, G Vettoretti, A P Sokolov, and S L Weber, 2006: Investigating the Causes of the Response of the Thermohaline Circulation to Past and Future Climate Changes. Journal of Climate, 19(8), DOI:10.1175/JCLI3689.1. Abstract
The Atlantic thermohaline circulation (THC) is an important part of the earth's climate system. Previous research has shown large uncertainties in simulating future changes in this critical system. The simulated THC response to idealized freshwater perturbations and the associated climate changes have been intercompared as an activity of World Climate Research Program (WCRP) Coupled Model Intercomparison Project/Paleo-Modeling Intercomparison Project (CMIP/PMIP) committees. This intercomparison among models ranging from the earth system models of intermediate complexity (EMICs) to the fully coupled atmosphere–ocean general circulation models (AOGCMs) seeks to document and improve understanding of the causes of the wide variations in the modeled THC response. The robustness of particular simulation features has been evaluated across the model results. In response to 0.1-Sv (1 Sv = 10⁶ m³ s⁻¹) freshwater input in the northern North Atlantic, the multimodel ensemble mean THC weakens by 30% after 100 yr. All models simulate some weakening of the THC, but no model simulates a complete shutdown of the THC. The multimodel ensemble indicates that the surface air temperature could present a complex anomaly pattern with cooling south of Greenland and warming over the Barents and Nordic Seas. The Atlantic ITCZ tends to shift southward. In response to 1.0-Sv freshwater input, the THC switches off rapidly in all model simulations. A large cooling occurs over the North Atlantic. The annual mean Atlantic ITCZ moves into the Southern Hemisphere. Models disagree in terms of the reversibility of the THC after its shutdown. In general, the EMICs and AOGCMs obtain similar THC responses and climate changes with more pronounced and sharper patterns in the AOGCMs.
Gregory, Jonathan M., Keith W Dixon, Ronald J Stouffer, A J Weaver, E Driesschaert, M Eby, T Fichefet, H Hasumi, Aixue Hu, J H Jungclaus, I V Kamenkovich, A Levermann, M Montoya, S Murakami, S Nawrath, A Oka, A P Sokolov, and R B Thorpe, 2005: A model intercomparison of changes in the Atlantic thermohaline circulation in response to increasing atmospheric CO2 concentration. Geophysical Research Letters, 32, L12703, DOI:10.1029/2005GL023209. Abstract
As part of the Coupled Model Intercomparison Project, integrations with a common design have been undertaken with eleven different climate models to compare the response of the Atlantic thermohaline circulation (THC) to time-dependent climate change caused by increasing atmospheric CO2 concentration. Over 140 years, during which the CO2 concentration quadruples, the circulation strength declines gradually in all models, by between 10 and 50%. No model shows a rapid or complete collapse, despite the fairly rapid increase and high final concentration of CO2. The models having the strongest overturning in the control climate tend to show the largest THC reductions. In all models, the THC weakening is caused more by changes in surface heat flux than by changes in surface water flux. No model shows a cooling anywhere, because the greenhouse warming is dominant.
This paper summarizes the formulation of the ocean component to the Geophysical Fluid Dynamics Laboratory's (GFDL) climate model used for the 4th IPCC Assessment (AR4) of global climate change. In particular, it reviews the numerical schemes and physical parameterizations that make up an ocean climate model and how these schemes are pieced together for use in a state-of-the-art climate model. Features of the model described here include the following: (1) tripolar grid to resolve the Arctic Ocean without polar filtering, (2) partial bottom step representation of topography to better represent topographically influenced advective and wave processes, (3) more accurate equation of state, (4) three-dimensional flux limited tracer advection to reduce overshoots and undershoots, (5) incorporation of regional climatological variability in shortwave penetration, (6) neutral physics parameterization for representation of the pathways of tracer transport, (7) staggered time stepping for tracer conservation and numerical efficiency, (8) anisotropic horizontal viscosities for representation of equatorial currents, (9) parameterization of exchange with marginal seas, (10) incorporation of a free surface that accommodates a dynamic ice model and wave propagation, (11) transport of water across the ocean free surface to eliminate unphysical "virtual tracer flux" methods, (12) parameterization of tidal mixing on continental shelves. We also present preliminary analyses of two particularly important sensitivities isolated during the development process, namely the details of how parameterized subgridscale eddies transport momentum and tracers.
Santer, B D., T M L Wigley, C Mears, F J Wentz, Stephen A Klein, D J Seidel, Karl E Taylor, P W Thorne, Michael F Wehner, Peter J Gleckler, J S Boyle, William D Collins, Keith W Dixon, Charles Doutriaux, M Free, Qiang Fu, J E Hansen, G S Jones, R Ruedy, T R Karl, John R Lanzante, Gerald A Meehl, V Ramaswamy, G Russell, and Gavin A Schmidt, 2005: Amplification of surface temperature trends and variability in the tropical atmosphere. Science, 309(5740), DOI:10.1126/science.1114867. Abstract
The month-to-month variability of tropical temperatures is larger in the troposphere than at Earth's surface. This amplification behavior is similar in a range of observations and climate model simulations and is consistent with basic theory. On multidecadal time scales, tropospheric amplification of surface warming is a robust feature of model simulations, but it occurs in only one observational data set. Other observations show weak, or even negative, amplification. These results suggest either that different physical mechanisms control amplification processes on monthly and decadal time scales, and models fail to capture such behavior; or (more plausibly) that residual errors in several observational data sets used here affect their representation of long-term trends.
The configuration and performance of a new atmosphere and land model for climate research developed at the Geophysical Fluid Dynamics Laboratory (GFDL) are presented. The atmosphere model, known as AM2, includes a new gridpoint dynamical core, a prognostic cloud scheme, and a multispecies aerosol climatology, as well as components from previous models used at GFDL. The land model, known as LM2, includes soil sensible and latent heat storage, groundwater storage, and stomatal resistance. The performance of the coupled model AM2–LM2 is evaluated with a series of prescribed sea surface temperature (SST) simulations. Particular focus is given to the model's climatology and the characteristics of interannual variability related to El Niño–Southern Oscillation (ENSO).
One AM2–LM2 integration was performed according to the prescriptions of the second Atmospheric Model Intercomparison Project (AMIP II) and data were submitted to the Program for Climate Model Diagnosis and Intercomparison (PCMDI). Particular strengths of AM2–LM2, as judged by comparison to other models participating in AMIP II, include its circulation and distributions of precipitation. Prominent problems of AM2–LM2 include a cold bias to surface and tropospheric temperatures, weak tropical cyclone activity, and weak tropical intraseasonal activity associated with the Madden–Julian oscillation.
An ensemble of 10 AM2–LM2 integrations with observed SSTs for the second half of the twentieth century permits a statistically reliable assessment of the model's response to ENSO. In general, AM2–LM2 produces a realistic simulation of the anomalies in tropical precipitation and extratropical circulation that are associated with ENSO.
We present results from a series of ensemble integrations of a global coupled atmosphere-ocean model for the period 1865-1997. Each ensemble consists of three integrations initialized from different points in a long-running GFDL R30 coupled model control simulation. The first ensemble includes time-varying forcing from greenhouse gases only. In the remaining three ensembles, forcings from anthropogenic sulfate aerosols, solar variability, and volcanic aerosols in the stratosphere are added progressively, such that the fourth ensemble uses all four of these forcings. The effects of anthropogenic sulfate aerosols are represented by changes in surface albedo, and the effects of volcanic aerosols are represented by latitude-dependent perturbations in incident solar radiation. Comparisons with observations reveal that the addition of the natural forcings (solar and volcanic) improves the simulation of global multidecadal trends in temperature, precipitation, and ocean heat content. Solar and volcanic forcings are important contributors to early twentieth century warming. Volcanic forcing reduces the warming simulated for the late twentieth century. Interdecadal variations in global mean surface air temperature from the ensemble of experiments with all four forcings are very similar to observed variations during most of the twentieth century. The improved agreement of simulated and observed temperature trends when natural climate forcings are included supports the climatic importance of variations in radiative forcing during the twentieth century.
The transient responses of two versions of the Geophysical Fluid Dynamics Laboratory (GFDL) coupled climate model to a climate change forcing scenario are examined. The same computer codes were used to construct the atmosphere, ocean, sea ice and land surface components of the two models, and they employ the same types of sub-grid-scale parameterization schemes. The two model versions differ primarily, but not solely, in their spatial resolution. Comparisons are made of results from six coarse-resolution R15 climate change experiments and three medium-resolution R30 experiments in which levels of greenhouse gases (GHGs) and sulfate aerosols are specified to change over time. The two model versions yield similar global mean surface air temperature responses until the second half of the 21st century, after which the R15 model exhibits a somewhat larger response. Polar amplification of the Northern Hemisphere's warming signal is more pronounced in the R15 model, in part due to the R15's cooler control climate, which allows for larger snow and ice albedo positive feedbacks. Both models project a substantial weakening of the North Atlantic overturning circulation and a large reduction in the volume of Arctic sea ice to occur in the 21st century. Relative to their respective control integrations, there is a greater reduction of Arctic sea ice in the R15 experiments than in the R30 simulations as the climate system warms. The globally averaged annual mean precipitation rate is simulated to increase over time, with both model versions projecting an increase of about 8% to occur by the decade of the 2080s. While the global mean precipitation response is quite similar in the two models, regional differences exist, with the R30 model displaying larger increases in equatorial regions.
Several indices of large-scale patterns of surface temperature variation were used to investigate climate change in North America over the 20th century. The observed variability of these indices was simulated well by a number of climate models. Comparison of index trends in observations and model simulations shows that North American temperature changes from 1950 to 1999 were unlikely to be due to natural climate variation alone. Observed trends over this period are consistent with simulations that include anthropogenic forcing from increasing atmospheric greenhouse gases and sulfate aerosols. However, most of the observed warming from 1900 to 1949 was likely due to natural climate variation.
A review is presented of the development and simulation characteristics of the most recent version of a global coupled model for climate variability and change studies at the Geophysical Fluid Dynamics Laboratory, as well as a review of the climate change experiments performed with the model. The atmospheric portion of the coupled model uses a spectral technique with rhomboidal 30 truncation, which corresponds to a transform grid with a resolution of approximately 3.75° longitude by 2.25° latitude. The ocean component has a resolution of approximately 1.875° longitude by 2.25° latitude. Relatively simple formulations of river routing, sea ice, and land surface processes are included. Two primary versions of the coupled model are described, differing in their initialization techniques and in the specification of sub-grid scale oceanic mixing of heat and salt. For each model a stable control integration of near-millennial-scale duration has been conducted, and the characteristics of both the time-mean and variability are described and compared to observations. A review is presented of a suite of climate change experiments conducted with these models using both idealized and realistic estimates of time-varying radiative forcing. Some experiments include estimates of forcing from past changes in volcanic aerosols and solar irradiance. The experiments performed are described, and some of the central findings are highlighted. In particular, the observed increase in global mean surface temperature is largely contained within the spread of simulated global mean temperatures from an ensemble of experiments using observationally-derived estimates of the changes in radiative forcing from increasing greenhouse gases and sulfate aerosols.
Church, J A., Jonathan M Gregory, P Huybrechts, M Kuhn, K Lambeck, M T Nhuan, D Qin, P A Woodworth, O A Anisimov, F O Bryan, A Cazenave, Keith W Dixon, B B Fitzharris, G M Flato, A Ganopolski, V Gornitz, J A Lowe, A Noda, J M Oberhuber, S P O'Farrell, A Ohmura, M Oppenheimer, W R Peltier, S C B Raper, C Ritz, G Russell, E Schlosser, C K Shum, T F Stocker, Ronald J Stouffer, R S W van der Wal, R Voss, E C Wiebe, M Wild, Duncan J Wingham, and H J Zwally, 2001: Changes in sea level In Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge, UK, Cambridge University Press, 640-693.
Gregory, Jonathan M., J A Church, G J Boer, Keith W Dixon, G M Flato, D R Jackett, J A Lowe, S P O'Farrell, M M Rienecker, G Russell, Ronald J Stouffer, and Michael Winton, 2001: Comparison of results from several AOGCMs for global and regional sea-level change 1900-2100. Climate Dynamics, 18(3/4), 225-240. Abstract PDF
Sea-level rise is an important aspect of climate change because of its impact on society and ecosystems. Here we present an intercomparison of results from ten coupled atmosphere-ocean general circulation models (AOGCMs) for sea-level changes simulated for the twentieth century and projected to occur during the twenty-first century in experiments following scenario IS92a for greenhouse gases and sulphate aerosols. The model results suggest that the rate of sea-level rise due to thermal expansion of sea water has increased during the twentieth century, but the small set of tide gauges with long records might not be adequate to detect this acceleration. The rate of sea-level rise due to thermal expansion continues to increase throughout the twenty-first century, and the projected total is consequently larger than in the twentieth century; for 1990-2090 it amounts to 0.20-0.37 m. This wide range results from systematic uncertainty in modeling of climate change and of heat uptake by the ocean. The AOGCMs agree that sea-level rise is expected to be geographically non-uniform, with some regions experiencing as much as twice the global average, and others practically zero, but they do not agree about the geographical pattern. The lack of agreement indicates that we cannot currently have confidence in projections of local sea-level changes, and reveals a need for detailed analysis and intercomparison in order to understand and reduce the disagreements.
We compared the temporal variability of the heat content of the world ocean, of the global atmosphere, and of components of Earth's cryosphere during the latter half of the 20th century. Each component has increased its heat content (the atmosphere and the ocean) or exhibited melting (the cryosphere). The estimated increase of observed global ocean heat content (over the depth range from 0 to 3000 meters) between the 1950s and 1990s is at least one order of magnitude larger than the increase in heat content of any other component. Simulation results using an atmosphere-ocean general circulation model that includes estimates of the radiative effects of observed temporal variations in greenhouse gases, sulfate aerosols, solar irradiance, and volcanic aerosols over the past century agree with our observation-based estimate of the increase in ocean heat content. The results we present suggest that the observed increase in ocean heat content may largely be due to the increase of anthropogenic gases in Earth's atmosphere.
McAveney, B, Anthony J Broccoli, Keith W Dixon, and Ronald J Stouffer, et al., 2001: Model evaluation In Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge, UK, Cambridge University Press, 472-523.
Time lags between changes in radiative forcing and the resulting simulated climate responses are investigated in a set of transient climate change experiments. Both surface air temperature (SAT) and soil moisture responses are examined. Results suggest that if the radiative forcing is held fixed at today's levels, the global mean SAT will rise an additional 1.0 K before equilibrating. This unrealized warming commitment is larger than the 0.6 K warming observed since 1900. The coupled atmosphere-ocean GCM's transient SAT response for the year 2000 is estimated to be similar to its equilibrium response to 1980 radiative forcings - a lag of ~20 years. Both the time lag and the warming commitment are projected to increase in the future, and depend on the model's climate sensitivity, oceanic heat uptake, and the forcing scenario. These results imply that much of the warming due to current greenhouse gas levels is yet to be realized.
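The lagged response and warming commitment described above can be illustrated with a zero-dimensional energy-balance sketch. The feedback parameter, effective heat capacity, and forcing ramp below are illustrative round numbers, not values diagnosed from the GFDL model:

```python
# Zero-dimensional energy balance: C dT/dt = F(t) - lam*T.
# Hold the radiative forcing fixed after a ramp and the temperature keeps
# rising toward its equilibrium value F/lam -- the "unrealized warming".
# lam (feedback, W m-2 K-1), C (heat capacity, W yr m-2 K-1), and the
# forcing ramp are illustrative assumptions.

def simulate(years, forcing, lam=1.3, C=30.0, dt=1.0):
    """Euler-integrate the energy balance; returns the anomaly series (K)."""
    T, series = 0.0, []
    for step in range(int(years / dt)):
        F = forcing(step * dt)
        T += dt * (F - lam * T) / C
        series.append(T)
    return series

# Forcing ramps linearly to 2 W m-2 over 100 years, then is held constant.
ramp = lambda t: 2.0 * min(t, 100.0) / 100.0

T = simulate(400, ramp)
warming_at_stabilization = T[99]   # when the forcing stops rising
equilibrium_warming = T[-1]        # approaches F/lam = 2.0/1.3 K
commitment = equilibrium_warming - warming_at_stabilization
```

With these numbers the ocean-like heat capacity delays equilibration by decades, so a sizable fraction of the equilibrium warming is still "in the pipeline" when the forcing stabilizes, which is the qualitative behavior the abstract describes.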
Delworth, Thomas L., and Keith W Dixon, 2000: Implications of the recent trend in the Arctic/North Atlantic Oscillation for the North Atlantic Thermohaline Circulation. Journal of Climate, 13(21), 3721-3727. Abstract PDF
Most projections of greenhouse gas-induced climate change indicate a weakening of the thermohaline circulation (THC) in the North Atlantic in response to increased freshening and warming in the subpolar region. These changes reduce high-latitude upper-ocean density and therefore weaken the THC. Using ensembles of numerical experiments with a coupled ocean-atmosphere model, it is found that this weakening could be delayed by several decades in response to a sustained upward trend in the Arctic/North Atlantic oscillation during winter, such as has been observed over the last 30 years. The stronger winds over the North Atlantic associated with this trend extract more heat from the ocean, thereby cooling and increasing the density of the upper ocean and thus opposing the previously described weakening of the THC. This result is of particular importance if the positive trend in the Arctic/North Atlantic oscillation is a response to increasing greenhouse gases, as has been recently suggested.
The mechanism by which the model-simulated North Atlantic thermohaline circulation (THC) weakens in response to increasing greenhouse gas (GHG) forcing is investigated through the use of a set of five multi-century experiments. Using a coarse resolution version of the GFDL coupled climate model, the role of various surface fluxes in weakening the THC is assessed. Changes in net surface freshwater fluxes (precipitation, evaporation, and runoff from land) are found to be the dominant cause for the model's THC weakening. Surface heat flux changes brought about by rising GHG levels also contribute to THC weakening, but are of secondary importance. Wind stress variations have negligible impact on the THC's strength in the transient GHG experiment.
Dixon, Keith W., and John R Lanzante, 1999: Global mean surface air temperature and North Atlantic overturning in a suite of coupled GCM climate change experiments. Geophysical Research Letters, 26(13), 1885-1888. Abstract PDF
The effects of model initial conditions and the starting time of transient radiative forcings on global mean surface air temperature (SAT) and the North Atlantic thermohaline circulation (THC) are studied in a set of coupled climate GCM experiments. Nine climate change scenario experiments, in which the effective levels of greenhouse gases and tropospheric sulfate aerosols vary in time, are initialized from various points in a long control model run. The time at which the transition from constant to transient radiative forcing takes place is varied in the scenario runs, occurring at points representing either year 1766, 1866 or 1916. The sensitivity of projected 21st century global mean SATs and the THC to the choice of radiative forcing transition point is small, and is similar in magnitude to the variability arising from variations in the coupled GCM's initial three-dimensional state.
Analyses are conducted to assess whether simulated trends in SST and land surface air temperature from two versions of a coupled ocean-atmosphere model are consistent with the geographical distribution of observed trends over the period 1949-1997. The simulated trends are derived from model experiments with both constant and time-varying radiative forcing. The models analyzed are low-resolution (R15, ~4°) and medium-resolution (R30, ~2°) versions of the Geophysical Fluid Dynamics Laboratory (GFDL) coupled climate model. Internal climate variability is estimated from long control integrations of the models with no change of external forcing. The radiatively forced trends are based on ensembles of integrations using estimated past concentrations of greenhouse gases and direct effects of anthropogenic sulfate aerosols (G+S). For the regional assessment, the observed trends at each grid point with adequate temporal coverage during 1949-1997 are first compared with the R15 and R30 model unforced internal variability. Nearly 50% of the analyzed areas have observed warming trends exceeding the 95th percentile of trends from the control simulations. These results suggest that regional warming trends over much of the globe during 1949-1997 are very unlikely to have occurred due to internal climate variability alone and suggest a role for a sustained positive thermal forcing such as increasing greenhouse gases. The observed trends are then compared with the trend distributions obtained by combining the ensemble mean G+S forced trends with the internal variability "trend" distributions from the control runs. Better agreement is found between the ensemble mean G+S trends and the observed trends than between the model internal variability alone and the observed trends. However, the G+S trends are still significantly different from the observed trends over about 30% of the areas analyzed.
Reasons for these regional inconsistencies between the simulated and the observed trends include possible deficiencies in (1) specified radiative forcings, (2) simulated responses to specified radiative forcings, (3) simulation of internal climate variability, or (4) observed temperature records.
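The grid-point test described in this abstract (comparing an observed multidecadal trend with the spread of same-length trends from an unforced control run) can be sketched as follows; the AR(1) "control run" and all numerical values are synthetic placeholders, not model output:

```python
# Detection sketch: is an observed 49-year trend larger than the 95th
# percentile of 49-year trends found in an unforced control simulation?
# The control series here is synthetic AR(1) noise standing in for
# internal variability; all parameters are illustrative.
import random

def trend(series):
    """Ordinary least-squares slope per time step."""
    n = len(series)
    t_mean = (n - 1) / 2.0
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

random.seed(1)
control, x = [], 0.0
for _ in range(1000):                 # 1000-year synthetic control run
    x = 0.6 * x + random.gauss(0, 0.1)
    control.append(x)

window = 49                           # length of the 1949-1997 period
null_trends = [trend(control[i:i + window])
               for i in range(len(control) - window)]
p95 = sorted(null_trends)[int(0.95 * len(null_trends))]

observed_trend = 0.012                # K per year, illustrative value
exceeds = observed_trend > p95        # "very unlikely from internal
                                      #  variability alone" if True
```

The paper's second step, comparing observations against the forced-plus-internal distribution, amounts to shifting `null_trends` by the ensemble-mean forced trend before taking percentiles.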
Stouffer, Ronald J., and Keith W Dixon, 1998: Initialization of coupled models for use in climate studies: A review In Research Activities in Atmospheric and Oceanic Modelling, WMO/TD No. 865, Geneva, Switzerland, World Meteorological Organization, I.1-I.8.
Dixon, Keith W., J L Bullister, R H Gammon, and Ronald J Stouffer, 1996: Examining a coupled climate model using CFC-11 as an ocean tracer. Geophysical Research Letters, 23(15), 1957-1960. Abstract PDF
Anthropogenic CFC-11 dissolved in seawater is used to analyze ocean ventilation simulated in a global coupled air-sea model. Modeled CFC-11 distributions are compared to observations gathered on three Southern Hemisphere research cruises. The total amount of CFC-11 absorbed by the model's Southern Ocean is realistic, though some notable differences in the vertical structure exist. Observed and simulated CFC-11 distributions are qualitatively consistent with the coupled model's predictions that the ocean may delay greenhouse gas-induced warming of surface air temperatures at high southern latitudes. The sensitivity of model-predicted CFC-11 levels in the deep Southern Ocean to the choice of gas exchange parameterization suggests that quantitative assessments of model performance based upon simulated CFC-11 distributions can be limited by air-sea gas flux uncertainties in areas of rapid ocean ventilation. Such sensitivities can complicate the quantitative aspects of CFC-11 comparisons between models and observations, and between different models.
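The gas-exchange sensitivity noted above can be illustrated with a toy surface box: the air-sea flux is written as a piston velocity times the surface undersaturation, and the steady saturation level of the mixed layer then depends directly on the piston-velocity choice. All parameter values below are made-up round numbers, not the study's actual parameterizations:

```python
# Illustrative piston-velocity sensitivity for surface tracer uptake:
# flux in = k_w * (C_sat - C); tracer is also diluted by mixing with
# tracer-free water from below on a fixed timescale.

def equilibrate(k_w, depth=50.0, tau_mix=2.6e6, c_sat=1.0,
                dt=86400.0, steps=2000):
    """Surface concentration (as a fraction of saturation) of a mixed
    layer of the given depth (m), exchanging gas with the atmosphere at
    piston velocity k_w (m/s) while losing tracer to the interior on
    timescale tau_mix (s)."""
    c = 0.0
    for _ in range(steps):
        c += dt * (k_w * (c_sat - c) / depth - c / tau_mix)
    return c

slow = equilibrate(2e-5)   # a "slow" gas-exchange formulation
fast = equilibrate(5e-5)   # a 2.5x faster one
# The steady saturation level, and hence the simulated CFC-11 inventory,
# differs markedly between the two choices wherever ventilation is rapid.
```

This is the qualitative point of the abstract: where water is exported quickly, the simulated tracer burden inherits the uncertainty in the gas-flux formulation.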
Toggweiler, J R., Keith W Dixon, and W Broecker, 1991: The Peru upwelling and the ventilation of the South Pacific thermocline. Journal of Geophysical Research, 96(C11), 20,467-20,497. Abstract PDF
A reconstruction of the prebomb Δ14C distribution in the tropical Pacific using data from old coral heads shows that surface waters with the lowest Δ14C content are found distinctly south of the equator. Prebomb, low-Δ14C surface water appears to owe its origin to the upwelling of ~15°C water off the coast of Peru. The low-Δ14C water upwelling off Peru is shown to be derived from the "13°C Water" thermostad (11°-14°C) of the Equatorial Undercurrent. Untritiated water in the lower part of the undercurrent had nearly the same Δ14C content during the Geochemical Ocean Sections Study (GEOSECS) as the prebomb growth bands in Druffel's (1981) Galapagos coral. Similar Δ14C levels were observed in 9°-10°C water in the southwest Pacific thermocline in the late 1950s. We suggest that the low-Δ14C water upwelling off Peru and the thermostad water in the undercurrent both originate as ~8°C water in the subantarctic region of the southwest Pacific. This prescription points to the "lighter variety" of Subantarctic Mode Water (7°-10°C) as a possible source. Because prebomb Δ14C is so weakly forced by exchange of carbon isotopes with the atmosphere, thermocline levels of Δ14C should be particularly unaffected by diapycnal mixing with warmer overlying water types. We argue that successively less dense features of the South Pacific thermocline, like the Subantarctic Mode Water, the equatorial 13°C Water, and the Peru upwelling, may be part of a single process of thermocline ventilation. Each evolves from the other by diapycnal alteration, while prebomb Δ14C is nearly conserved. Detailed comparisons are made between the coral Δ14C distribution and a model simulation of radiocarbon in Toggweiler et al. (1989). While the Δ14C data suggest a southern hemisphere thermocline origin for the equatorial Δ14C minimum, the model produces its Δ14C minimum by upwelling abyssal water to the surface via the equatorial divergence.
In an appendix to the paper we present a new set of coral Δ14C measurements produced over the last 10 years at Lamont-Doherty Geological Observatory and compile a post-1950 set of published coral Δ14C measurements for use in model validation studies.
Toggweiler, J R., Keith W Dixon, and Kirk Bryan, 1989: Simulations of radiocarbon in a coarse-resolution world ocean model 1. Steady state prebomb distributions. Journal of Geophysical Research, 94(C6), 8217-8242. Abstract PDF
This paper presents the results of five numerical simulations of the radiocarbon distribution in the ocean using the Geophysical Fluid Dynamics Laboratory primitive equation world ocean general circulation model. The model has a 4.5 degree latitude by 3.75 degree longitude grid, 12 vertical levels, and realistic continental boundaries and bottom topography. The model is forced at the surface by observed, annually averaged temperatures, salinities, and wind stresses. There are no chemical transformations or transport of 14C by biological processes in the model. Each simulation in this paper has been run for the equivalent of several thousand years to simulate the natural, steady state distribution of 14C in the ocean. In a companion paper the final state of these simulations is used as the starting point for simulations of the ocean's transient uptake of bomb-produced 14C. The model reproduces the mid-depth 14C minimum observed in the North Pacific and the strong front near 45 degrees S between old, deep Pacific waters and younger circumpolar waters. In the Atlantic, the model's deep 14C distribution is much too strongly layered with relatively old water from the Antarctic penetrating into the northern reaches of the North Atlantic basin. Two thirds of the decay of 14C between 35 degrees S and 35 degrees N is balanced by local 14C input from the atmosphere and downward transport by vertical mixing (both diffusion and advective stirring). Only one third is balanced by transport of 14C from high latitudes. A moderately small mixing coefficient of 0.3 cm² s⁻¹ adequately parameterizes vertical diffusion in the upper kilometer. Spatial variation in gas exchange rates is found to have a negligible effect on deepwater radiocarbon values. Ventilation of the circumpolar region is organized in the model as a deep overturning cell which penetrates as much as 3500 m below the surface.
While allowing the circumpolar deep water to be relatively well ventilated, the overturning cell restricts the ventilation of the deep Pacific and Indian basins to the north. This study utilizes three different realizations of the ocean circulation. One is generated by a purely prognostic model, in which only surface temperatures and salinities are restored to observed values. Two are generated by a semidiagnostic model, in which interior temperatures and salinities are restored toward observed values with a 1/50 year⁻¹ time constant. The prognostic version is found to produce a clearly superior deep circulation in spite of producing interior temperatures and salinities which deviate very noticeably from observed values. The weak restoring terms in the diagnostic model suppress convection and other vertical motions, causing major disruptions in the diagnostic model's deep sea ventilation.
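As a rough guide to the Δ14C values discussed in these radiocarbon papers, a parcel's apparent age can be read off its Δ14C depletion using the 14C mean life. This closed-system sketch ignores mixing and reservoir effects (which, as the text notes, matter in practice), and the sample Δ14C values are illustrative:

```python
import math

# Apparent 14C age of a water parcel from its Δ14C depletion, assuming
# simple closed-system decay (no mixing, constant atmospheric source).
# Mean life = half-life / ln 2, using the 5730-year half-life.

MEAN_LIFE = 5730.0 / math.log(2.0)   # ≈ 8267 years

def apparent_age(delta14c_permil):
    """Years of decay implied by Δ14C (per mil) relative to Δ14C = 0."""
    return -MEAN_LIFE * math.log(1.0 + delta14c_permil / 1000.0)

# Illustrative values: old deep North Pacific water vs. a surface sample.
deep_age = apparent_age(-240.0)
surface_age = apparent_age(-50.0)
ventilation_age = deep_age - surface_age   # apparent age difference
```

Because Δ14C is nearly conserved under mixing (the abstract's point), such single-parcel ages should be read as qualitative indicators, not literal transit times.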
Toggweiler, J R., Keith W Dixon, and Kirk Bryan, 1989: Simulations of radiocarbon in a coarse-resolution world ocean model 2. Distributions of bomb-produced carbon 14. Journal of Geophysical Research, 94(C6), 8243-8264. Abstract PDF
Part 1 of this study examined the ability of the Geophysical Fluid Dynamics Laboratory (GFDL) primitive equation ocean general circulation model to simulate the steady state distribution of naturally produced 14C in the ocean prior to the nuclear bomb tests of the 1950s and early 1960s. In part 2 we begin with the steady state distributions of part 1 and subject the model to the pulse of elevated atmospheric 14C concentrations observed since the 1950s. This study focuses on the processes and time scales which govern the transient distributions of bomb 14C in the upper kilometer of the ocean. Model projections through 1990 are compared with observations compiled by the Geochemical Ocean Sections Study (GEOSECS) in 1972, 1974, and 1978; the Transient Tracers in the Ocean (TTO) expedition in 1981, and the French INDIGO expeditions in 1985-1987. In their analysis of the GEOSECS 14C observations, Broecker et al. (1985) noted that much of the bomb 14C which entered the ocean's equatorial belts prior to GEOSECS accumulated in the adjacent subtropical zones. Broecker et al. argued that this displacement of bomb 14C inventories was caused by the wind-driven upwelling and surface divergence in the tropics combined with convergent flow and downwelling in the subtropics. Similar displacements were invoked to shift bomb 14C from the Antarctic circumpolar region into the southern temperate zone. The GFDL model successfully reproduces the observed GEOSECS inventories, but then predicts a significantly different pattern of bomb 14C uptake in the decade following GEOSECS. The post-GEOSECS buildup of bomb 14C inventories is largely confined to the subthermocline layers of the North Atlantic, the lower thermocline of the southern hemisphere, and down to 2000 m in the circumpolar region. A great deal of attention is devoted to detailed comparisons between the model and the available radiocarbon data. A number of flaws in the model are highlighted by this analysis. 
The Subantarctic Mode Waters forming along the northern edge of the circumpolar current are identified as a very important process for carrying bomb 14C into the thermoclines of the southern hemisphere. The model concentrates its mode water formation in a single sector of the circumpolar region and consequently fails to form its mode waters with the correct T-S properties. The model also moves bomb 14C into the deep North Atlantic and deep circumpolar region much too slowly.
Dixon, Keith W., and R P Harnack, 1986: Effect of intraseasonal circulation variability on winter temperature forecast skill. Monthly Weather Review, 114(1), 208-214. Abstract PDF
The prediction of winter temperatures in the U.S. from Pacific sea surface temperatures was examined by using a jackknifed regression scheme and a measure of intraseasonal atmospheric circulation variability. Using a jackknifed regression methodology when deriving objective prediction equations permitted forecast skill to be better quantified than in past studies by greatly increasing the effective independent sample size. The procedures were repeated on three data sets: 1) all winters in the period 1950-1979 (30 winters), 2) the 15 winters having the highest Variability Index (VI), and 3) the 15 winters having the lowest VI. The VI was constructed to measure the intraseasonal variability of five-day-period mean 700-mb heights for a portion of the Northern Hemisphere. Verification results showed that statistically significant skill was achieved in the complete sample (overall mean percent correct of 39 and 59 for three- and two-category forecasts, respectively), but improved somewhat for the low VI sample. In that case, corresponding scores were 44 and 64% correct. In contrast, the high VI sample scores were lower (34 and 58% correct) than for the complete sample, indicating that skill is probably dependent upon the degree of intraseasonal circulation variability.
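The jackknifed regression scheme described above amounts to leave-one-out cross-validation with a categorical verification score. A minimal sketch with synthetic data (the SST-like predictor and temperature-like predictand below are simulated, and "three-category" means below/near/above-normal terciles):

```python
# Leave-one-out ("jackknifed") regression: for each winter, fit the
# prediction equation on the other N-1 winters, predict the withheld one,
# and score three-category percent correct. Data are synthetic.
import random

def terciles(values):
    s = sorted(values)
    n = len(s)
    return s[n // 3], s[(2 * n) // 3]

def category(v, lo, hi):
    return 0 if v < lo else (2 if v > hi else 1)

def jackknife_percent_correct(x, y):
    n, hits = len(x), 0
    for i in range(n):
        xi = [x[j] for j in range(n) if j != i]
        yi = [y[j] for j in range(n) if j != i]
        # ordinary least squares on the retained winters
        mx, my = sum(xi) / len(xi), sum(yi) / len(yi)
        b = sum((a - mx) * (c - my) for a, c in zip(xi, yi)) / \
            sum((a - mx) ** 2 for a in xi)
        pred = my + b * (x[i] - mx)
        lo, hi = terciles(yi)
        hits += category(pred, lo, hi) == category(y[i], lo, hi)
    return 100.0 * hits / n

random.seed(0)
x = [random.gauss(0, 1) for _ in range(30)]        # e.g., an SST index
y = [0.7 * v + random.gauss(0, 0.7) for v in x]    # e.g., winter temps
result = jackknife_percent_correct(x, y)
```

Withholding each winter in turn is what gives the effectively independent verification sample the abstract credits for better-quantified skill; chance skill for three categories is about 33%.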