Rough estimates of the WMGG (well-mixed greenhouse gas — red) and non-WMGG (blue) components of the global mean temperature time series obtained from observed (HADCRUT4) Northern and Southern Hemisphere mean temperatures and different assumptions about the ratio of the Northern to Southern Hemisphere responses in these two components. Black lines are estimates of the response to WMGG forcing for 6 different values of the transient climate response TCR (1.0, 1.2, 1.4, 1.6, 1.8, 2.0C).
How can we use the spatial pattern of the surface temperature evolution to help determine how much of the warming over the past century was forced by increases in the well-mixed greenhouse gases (WMGGs: CO2, CH4, N2O, CFCs), assuming as little as possible about the non-WMGG forcing and internal variability? Here is a very simple approach using only two functions of time, the mean Northern and Southern Hemisphere temperatures. (See #7, #27, #35 for related posts.)
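The two-hemisphere decomposition, and the TCR scaling that generates curves like the black lines in the figure, can be sketched as follows. This is a minimal illustration, not the post's actual calculation: the function names, the example NH/SH ratios, and the canonical 3.7 W/m2 per CO2 doubling are all my assumptions.

```python
import numpy as np

F_2X = 3.7  # W/m2 per CO2 doubling (a canonical value; an assumption here)

def decompose(nh, sh, ratio_g, ratio_x):
    """Split NH- and SH-mean temperature anomalies into WMGG ('g') and
    non-WMGG ('x') components, given assumed NH/SH response ratios for each.
    Solves, at each time,
        nh = g_n + x_n
        sh = g_n/ratio_g + x_n/ratio_x
    (requires ratio_g != ratio_x) and returns the global mean of each part."""
    nh = np.asarray(nh, float)
    sh = np.asarray(sh, float)
    x_n = (sh - nh/ratio_g) / (1.0/ratio_x - 1.0/ratio_g)
    g_n = nh - x_n
    return 0.5*(g_n + g_n/ratio_g), 0.5*(x_n + x_n/ratio_x)

def tcr_response(forcing, tcr):
    """Warming that tracks the instantaneous WMGG forcing:
    T(t) = TCR * F(t) / F_2x, one curve per assumed TCR."""
    return tcr * np.asarray(forcing, float) / F_2X
```

With HADCRUT4 hemispheric means as `nh` and `sh` and an assumed pair of ratios (say, a larger NH/SH ratio for the non-WMGG component than for the WMGG one), `decompose` yields one red/blue pair of curves; sweeping `tcr` over 1.0-2.0 gives a family of responses in the spirit of the black lines.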
Schematic of the response of tropical rainfall to high latitude warming in one hemisphere and cooling in the other or, equivalently, to a cross-equatorial heat flux in the ocean. From Kang et al 2009.
When discussing the response of the distribution of precipitation around the world to increasing CO2 or other forcing agents, I think you can make the case for the following three basic ingredients:
1) the tendency for regions in which there is moisture convergence to get wetter and regions in which there is moisture divergence to get drier (“wet get wetter and dry get drier”) in response to warming (due to increases in water vapor in the lower troposphere — post #13);
2) the tendency for the subtropical dry zones and the mid-latitude storm tracks to move polewards with warming;
3) the tendency for the tropical rainbelts to move towards the hemisphere that warms more.
There are other important elements we could add to this set, especially if one focuses on particular regions — for example, changes in ENSO variability would affect rainfall in the tropics and over North America in important ways. But I think these three basic ingredients, in some combination, are important nearly everywhere. I want to focus here on 3), the effect on tropical rain belts of changing interhemispheric gradients.
Lower panel: the observed (irrotational) component of the horizontal eddy sensible heat flux at 850mb in the Northern Hemisphere in January, along with the mean temperature field at this level. Middle panel: a diffusive approximation to that flux. Upper panel: the spatially varying kinematic diffusivity used to generate the middle panel. From Held (1999) based on Kushner and Held (1998).
Let’s consider the simplest atmospheric model with diffusive horizontal transport on a sphere:

\[ \mathcal{C}\,\frac{\partial T}{\partial t} = \mathcal{S}(\theta) - \left(\mathcal{A} + \mathcal{B}\,(T - T_0)\right) + \mathcal{C}\,\mathcal{D}\,\nabla^2 T \]

Here \(\mathcal{S}(\theta)\) is the energy input into the atmosphere as a function of latitude \(\theta\), \(\mathcal{A} + \mathcal{B}\,(T - T_0)\) is the outgoing infrared flux linearized about some reference temperature \(T_0\), \(\mathcal{C}\) is the heat capacity of a tropospheric column per unit horizontal area, and \(\mathcal{D}\) is a kinematic diffusivity with units of (length)2/time. Think of the energy input as independent of time and, for the moment, think of \(\mathcal{D}\) as just a constant.
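A minimal numerical sketch of this model is easy to write down. The parameter values and the shape of the energy input below are toy choices of my own, not the post's; latitudes are truncated at 80 degrees so that a simple explicit scheme stays stable near the poles.

```python
import numpy as np

# Toy parameter values (illustrative assumptions, not the post's):
A, B = 210.0, 2.0     # OLR = A + B*T, W/m2, with T in deg C (reference T0 = 0)
C = 4.0e7             # heat capacity of a tropospheric column, J m-2 K-1
D = 5.0e5             # kinematic diffusivity, m2 s-1
a = 6.371e6           # planetary radius, m

n = 37
lat = np.linspace(-80.0, 80.0, n) * np.pi/180.0  # truncated at the poles
w = np.cos(lat)                                  # area weights
S = 300.0 - 150.0*np.sin(lat)**2                 # toy energy input S(theta), W/m2

T = np.zeros(n)
dlat = lat[1] - lat[0]
dt = 1.0e4                                       # time step, s
for _ in range(40000):                           # ~13 years, ample to equilibrate
    # conservative spherical diffusion with no-flux boundaries:
    flux = np.zeros(n + 1)
    flux[1:-1] = np.cos(0.5*(lat[1:] + lat[:-1])) * np.diff(T)/dlat
    div = np.diff(flux)/(dlat*w)
    T += (dt/C) * (S - (A + B*T) + C*D*div/a**2)

# Diffusion redistributes energy without creating it, so at equilibrium the
# area-weighted means satisfy <S> = A + B*<T>:
t_mean = np.sum(w*T)/np.sum(w)
s_mean = np.sum(w*S)/np.sum(w)
```

The discrete flux form is chosen so that the area-weighted integral of the diffusion term vanishes exactly, which makes the global energy balance a clean check on the integration.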
(Left) Sea surface temperature averaged over the North Atlantic (75W-7.5W, 0-60N) in the HADGEM2-ES model (ensemble mean red; standard deviation yellow) compared with observations (black), as discussed in Booth et al 2012. (Right) Upper ocean (< 700m) heat content in this model averaged over the same area, from Zhang et al 2013 (green = simulation with no anthropogenic aerosol forcing, kindly provided by Ben Booth).
A paper by Booth et al 2012 has attracted a lot of attention because of its claim that the interdecadal variability in the North Atlantic is in large part the response to external forcing agents, aerosols in particular, rather than internal variability. This has implications for estimates of (transient) climate sensitivity, but it also has very direct implications for our understanding of important climate variations such as the recent upward trend in Atlantic hurricane activity (linked to the recent rapid increase in N. Atlantic sea surface temperatures) and drought in the Sahel in the 1970s (linked to the cool N. Atlantic in that decade). I am a co-author of a recent paper by Rong Zhang and others (Zhang et al 2013) in which we argue that the Booth et al paper, and the model on which it is based, do not make a compelling case for this claim.
Anomalies in near surface air temperature over land (1979-2008), averaged over Asia and the months of June-July-August, from CRUTEM4 (green) and as simulated by atmosphere/land models in which oceanic boundary conditions are prescribed to follow observations (gray shading). See text and Post #32 for details.
This is a follow up to Post #32 on Northern Hemisphere land temperatures as simulated in models in which sea surface temperatures (SSTs) and sea ice extent are prescribed to follow observations. I am interested in whether we can use simulations of this “AMIP” type to learn something about how well a climate model is handling the response of land temperatures to different forcing agents such as aerosols and well-mixed greenhouse gases. If a model forced with prescribed SST/ice boundary conditions and prescribed variations in the forcing agents does a reasonably good job of simulating observations, we can then ask how much of this response is due to the SST variations and how much is due to the forcing agents (assuming linearity). If the response to SST variations is robust enough, we have a chance to subtract it off and see if different assumptions about aerosol forcing, in particular, improve or degrade the fit to observations.
Globally integrated, annual mean tropical cyclone (TC) and hurricane frequency simulated in the global model described in Post #2, as a function of a parameter in the model’s sub-grid moist convection closure scheme, from Zhao et al 2012.
It is difficult to convey to non-specialists the degree to which climate models are based on firm physical theory on the one hand, or tuned (I actually prefer optimized) to fit observations on the other. Rather than try to provide a general overview, it is easier to provide examples. Here is one related to post #2 in which I described the simulation of hurricanes in an atmospheric model.
Anomalies in annual mean near surface air temperature over land (1979-2008), averaged over the Northern Hemisphere, from CRUTEM4 (green) and as simulated by an ensemble of atmosphere/land models in which oceanic boundary conditions are prescribed to follow observations.
As discussed in previous posts, it is interesting to take the atmosphere and land surface components of a climate model and run it over sea surface temperatures (SSTs) and sea ice extents that, in turn, are prescribed to evolve according to observations. In Post #2 I discussed simulations of trend and variability in hurricane frequency in such a model, and Post #21 focused on the vertical structure of temperature trends in the tropical troposphere. A basic feature worth looking at in this kind of model is simply the land temperature – or, more precisely, the near-surface air temperature over land. How well do models simulate temperature variations and trends over land when SSTs and ice are specified? These simulations are referred to as AMIP simulations, and there are quite a few of these in the CMIP5 archive, covering the period 1979-2008.
Relative humidity evolution over one year in a 50km resolution atmospheric model, in the upper (250hPa) and lower (850hPa) troposphere.
In their 1-D radiative-convective paper of 1967, Manabe and Wetherald examined the consequences for climate sensitivity of the assumption that the tropospheric relative humidity (RH) remains fixed as the climate is warmed by increasing CO2. In the first (albeit rather idealized) GCM simulation of the response of climate to an increase in CO2, the same authors found, in 1975, that water vapor did increase throughout the model troposphere at roughly the rate needed to maintain fixed RH. The robustness of this result in the world’s climate models in the intervening decades has been impressive to those of us working with these models, given the differences in model resolution and the underlying algorithms, a robustness in sharp contrast to the diversity of cloud feedbacks in these same models.
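The quantitative content of the fixed-RH assumption is just Clausius-Clapeyron scaling: at fixed relative humidity, vapor pressure increases with temperature like the saturation vapor pressure. A minimal sketch, using the Bolton (1980) empirical fit (my choice of formula, not anything from the papers discussed):

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Saturation vapor pressure in hPa (Bolton 1980 empirical fit)."""
    return 6.112 * math.exp(17.67*t_celsius/(t_celsius + 243.5))

# At fixed RH, vapor pressure scales with e_s(T); near 15 C the fractional
# increase works out to roughly 6-7% per degree of warming:
t = 15.0
frac = saturation_vapor_pressure(t + 1.0)/saturation_vapor_pressure(t) - 1.0
```

This 6-7%/K number is the reason fixed RH implies a strong water vapor feedback: lower-tropospheric vapor grows much faster than the roughly 1-2%/K increase in global precipitation.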
Percentage change in the precipitation falling on days within which the daily precipitation is above the pth percentile (p is the horizontal axis), as a function of latitude and averaged over longitude, over the 21st century in a GCM projection for a business-as-usual scenario, from Pall et al 2007.
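A diagnostic of this kind can be sketched in a few lines. This is one plausible reading of the figure's construction, with thresholds fixed at the control-period percentiles; the function name and the details are my assumptions, not Pall et al's code.

```python
import numpy as np

def extreme_precip_change(control, warmed, percentiles):
    """Percent change in precipitation falling on days above the p-th
    percentile, with thresholds defined from the control-period
    distribution of daily precipitation."""
    control = np.asarray(control, float)
    warmed = np.asarray(warmed, float)
    changes = []
    for p in percentiles:
        thresh = np.percentile(control, p)
        base = control[control > thresh].sum()
        changes.append(100.0*(warmed[warmed > thresh].sum() - base)/base)
    return changes
```

With a fixed threshold, part of the change comes from heavier rain on days already above it and part from additional days crossing it, which is one reason such curves grow toward the highest percentiles.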
(I have added a paragraph under element (1) below in response to some off-line comments — Aug 15)
When I think about global warming enhancing “extremes”, I tend to distinguish in my own mind between different aspects of the problem as follows (there is nothing new here, but these distinctions are not always made very explicit):
1) increases in the frequency of extreme high temperatures that result from an increase in the mean of the temperature distribution without change in the shape of the distribution or in temporal correlations
The assumption that the distribution about the mean and correlations in time do not change certainly seems like an appropriately conservative starting point. But if you look far out on the distribution, the effects on the frequency of occurrence of days above a fixed high temperature, or of consecutive occurrences of very hot days (heat waves), can be surprisingly large. Just assuming a normal distribution, or playing with the shape of the tails of the distribution, and asking simple questions of this sort can be illuminating. I’m often struck by the statement that “we don’t care about the mean; we care about extremes” when these two things are so closely related (in the case of temperature). Uncertainty in the temperature response translates directly into uncertainty in changes in extreme temperatures in this fixed-distribution limit. It would be nice if, in model projections, it were more commonplace to divide up the responses in extreme temperatures into a part due just to the increase in mean and a part due to everything else. It would make it easier to see if there was much that was robust across models in the “everything else” part. And it also emphasizes the importance of comparing the shape of the tails of the distributions in models and observations. Of course, from this fixed-distribution perspective, every statement about the increase in hot extremes is balanced by one about decreases in cold extremes.
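The "surprisingly large" effect on the far tail is easy to see with a Gaussian sketch (a minimal illustration of the fixed-distribution limit; the function and numbers are mine, not from any particular study):

```python
import math

def exceedance(threshold_sigma, mean_shift_sigma=0.0):
    """P(T > threshold) for a normal distribution whose mean has shifted by
    mean_shift_sigma, with both arguments in units of the (unchanged)
    standard deviation."""
    z = threshold_sigma - mean_shift_sigma
    return 0.5*math.erfc(z/math.sqrt(2.0))

# A 1-sigma shift of the mean, with no change in the shape of the
# distribution, multiplies the frequency of +3-sigma days by roughly 17:
ratio = exceedance(3.0, mean_shift_sigma=1.0)/exceedance(3.0)
```

The multiplier grows rapidly the further out on the tail you look, which is the quantitative content of the point that mean warming and temperature extremes are closely related.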
Animation of the sea surface temperature in a coupled climate model under development at GFDL,
the ocean component having an average resolution of roughly 0.1 degree latitude and longitude.
Click HERE for the animation.
(Visualization created by Remik Ziemlinski; model developed by T. Delworth, A. Rosati, K. Dixon, W. Anderson using MOM4 as the oceanic code base.)
As models gradually move to finer spatial resolution we naturally expect to gradually improve our simulations of atmospheric and oceanic flows. But things get especially interesting when one passes thresholds at which new phenomena are simulated that were not present in anything like a realistic form at lower resolution. The animation illustrates what happens after one passes through an important oceanic threshold, allowing mesoscale eddies to form, filling the oceanic interior with what we refer to as geostrophic turbulence. At resolutions too coarse to simulate the formation of these eddies, flows in ocean models tend to be quite laminar except for some relatively large scale instabilities of intense currents of the kind seen in the snapshot north of the equator in the Eastern Pacific. (For a transition comparably fundamental in atmospheric models, one has to turn to the point at which global models begin to resolve the deep convective elements in the tropical atmosphere — see for example Post #19).