I find the following simple two degree-of-freedom linear model useful when thinking about transient climate responses:

c dT/dt = −βT − γ(T − T₀) + F

c₀ dT₀/dt = γ(T − T₀)
T and T₀ are meant to represent the perturbations to the global mean surface temperature and deep ocean temperature resulting from the radiative forcing F. The strength of the radiative restoring is determined by the constant β, which subsumes all of the radiative feedbacks — water vapor, clouds, snow, sea ice — that attract so much of our attention. The exchange of energy with the deep ocean is assumed to be proportional to the difference in the temperature perturbations between the surface and the deep layers, with constant of proportionality γ. The fast time scale is set by c, representing the heat capacity of the well-mixed surface layer, perhaps 50-100m deep on average (the atmosphere's heat capacity is negligible in comparison), while c₀ crudely represents an effective heat capacity of the rest of the ocean. Despite the fact that it seems to ignore everything that we know about oceanic mixing and subduction of surface waters into the deep ocean, I think this model gets you thinking about the right questions.
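The model is easy to experiment with numerically. Here is a minimal sketch in Python; all parameter values (β, γ, and the two layer depths) are illustrative choices of mine, not numbers tuned to any particular GCM:

```python
# Forward-Euler integration of the two-box model:
#   c  dT/dt  = -beta*T - gamma*(T - T0) + F
#   c0 dT0/dt =  gamma*(T - T0)
# All parameter values below are illustrative, not tuned to any GCM.

SECONDS_PER_YEAR = 3.15e7
HEAT_CAP_PER_METER = 4.2e6  # J/(m^2 K) per meter of water column

def integrate(F, years, beta=1.0, gamma=0.7,
              mixed_layer_m=100.0, deep_layer_m=2000.0, dt=0.01):
    """Return (T, T0) in K after `years`, for forcing F(t) in W/m^2."""
    c = HEAT_CAP_PER_METER * mixed_layer_m / SECONDS_PER_YEAR   # W yr/(m^2 K)
    c0 = HEAT_CAP_PER_METER * deep_layer_m / SECONDS_PER_YEAR
    T = T0 = t = 0.0
    while t < years:
        dT = (-beta * T - gamma * (T - T0) + F(t)) / c
        dT0 = gamma * (T - T0) / c0
        T, T0, t = T + dt * dT, T0 + dt * dT0, t + dt
    return T, T0

# Abrupt 4 W/m^2 forcing: after 50 years the surface is near the
# intermediate-regime value F/(beta + gamma), while the deep ocean is
# still far below the equilibrium response F/beta.
T, T0 = integrate(lambda t: 4.0, years=50)
```

Swapping in a ramp or a volcanic spike for the forcing function is a one-line change, which is part of what makes this toy model convenient for interpreting GCM experiments.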
The two box model reduces to the classic one-box model if γ = 0:

c dT/dt = −βT + F
The equilibrium response in this model, the response eventually achieved for a fixed forcing, is T = F/β. The equilibrium response in this particular two-box model takes this same value, independent of γ.
These kinds of models are commonly used to help interpret the forced responses in much more elaborate GCMs. They are also often relied upon when discussing observational constraints on climate sensitivity. That is, one has a simple model with one or more parameters that control the magnitude of the response to a change in forcing. One then uses the same model to simulate some observations (the response to a volcano, perhaps) to constrain the values of these parameters. Appreciating the limitations of the underlying model (there is always an underlying model) is often the key to understanding competing claims concerning these constraints on sensitivity.
You can write down the solution to the two-box model, but let's just look at the special case in which c₀ is so large that the change in T₀ is negligible. We then have

c dT/dt = −(β + γ)T + F
so the time scale of the response to an abrupt change in forcing is τ = c/(β + γ). Surface temperature perturbations decay not just by radiating to space but also by losing energy to the deep ocean. A typical value of τ that one would use to mimic the behavior of a GCM might be about 4 years, if we can use the results described in post #3 as a guide.
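In this approximation the response to an abrupt forcing is a single exponential, which can be checked directly. The values of β, γ, c, and F below are illustrative assumptions:

```python
import math

# Illustrative values: beta, gamma in W/(m^2 K); c in W yr/(m^2 K)
# (roughly a 100 m mixed layer); F an abrupt forcing in W/m^2.
beta, gamma, c, F = 1.0, 0.7, 13.3, 4.0

tau = c / (beta + gamma)       # e-folding time, years
T_fast = F / (beta + gamma)    # fast-regime asymptote, K

def T(t):
    # Analytic solution of c dT/dt = -(beta + gamma)*T + F with T(0) = 0
    return T_fast * (1.0 - math.exp(-t / tau))
```

With these numbers τ comes out near 8 years; shrinking c toward a 50 m mixed layer brings it closer to the roughly 4 year value quoted above.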
But now suppose that F varies only on time scales longer than τ, continuing to assume that these time scales are short compared to the time required to modify T₀ significantly. Then

T ≈ F/(β + γ)
On these time scales both heat capacities drop out. The fast component is equilibrated, while the slow component is acting as an infinite reservoir. What is left is a response that is proportional to the forcing, with the deep ocean uptake acting as a negative feedback with strength γ. I'll refer to the time scales on which this balance holds as the intermediate regime. Gregory and Forster (2008) call β + γ the climate resistance. Dufresne and Bony (2008) is also a very useful reference.
The transient climate response, or TCR, is traditionally defined in terms of a particular calculation with a climate model: starting in equilibrium, increase CO₂ at a rate of 1% per year until the concentration has doubled (about 70 years). The amount of warming around the time of doubling is referred to as the TCR. If the CO₂ is then held fixed at this value, the climate will continue to warm slowly until it reaches the equilibrium response F/β. To the extent that this 70 year ramp-up qualifies as being in the intermediate regime, the ratio of TCR to the equilibrium response would be β/(β + γ) in the two-box model.
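A quick numerical version of this ramp experiment in the two-box model, using illustrative parameter values (the doubling forcing of 3.7 W/m² is a commonly quoted figure, and β, γ, and the heat capacities below are my assumptions, not the numbers from any GCM discussed here):

```python
# 1%/yr CO2 growth gives a forcing that rises roughly linearly in time
# (forcing is approximately logarithmic in concentration), reaching the
# doubling value after about 70 years. All values here are illustrative.
F2X = 3.7                      # W/m^2 for doubled CO2, a commonly quoted value
beta, gamma = 1.0, 0.7         # W/(m^2 K)
c, c0 = 13.3, 670.0            # W yr/(m^2 K): mixed layer vs. deep ocean

T = T0 = 0.0
dt = 0.01                      # time step, years
for step in range(int(70 / dt)):
    F = F2X * (step * dt) / 70.0
    T += dt * (-beta * T - gamma * (T - T0) + F) / c
    T0 += dt * gamma * (T - T0) / c0

tcr = T                        # warming at the time of doubling
eq = F2X / beta                # equilibrium response
ratio = tcr / eq               # intermediate regime predicts beta/(beta+gamma)
```

The ratio comes out a bit below β/(β + γ) ≈ 0.59 because the surface layer lags the ramp by roughly τ.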
The median of this ratio in the particular ensemble of GCMs referred to in the figure at the top of this post is 0.56. For several models the ratio is less than 0.5. Interestingly, it is very difficult to get ratios this small from the two-box model tuned to the equilibrium sensitivity of the GCMs and to their rate of heat uptake in transient simulations. (There is a fair amount of slop in these numbers – equilibrium responses are typically estimated with “slab-ocean” models in which changes in oceanic heat fluxes are neglected, and the transient simulations are single realizations — but I doubt that the basic picture would change much if refined.)
The heat uptake efficiency (κ) is defined to be the rate of heat uptake by the planet (the oceans to a first approximation) per unit global warming. Typical values in GCMs are κ ≈ 0.7 W/(m² K) (Dufresne and Bony, 2008). For a warming of 0.8 K over the past century, this magnitude of κ implies a rate of heat uptake at present of about 0.56 W/m². This value is consistent with Lyman et al. (2010) (their best estimate of the rate of heat uptake by the upper 700 m of the ocean over the period 1993-2008 is 0.64 W/m²). In the two-box model, κ = γ if the heat stored in the surface layer is small compared to that in the oceanic interior.
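The arithmetic here is just κ times the warming; as a trivial check, taking κ ≈ 0.7 W/(m² K) and 0.8 K of warming as in the text:

```python
# Back-of-the-envelope check of the heat-uptake arithmetic in the text.
kappa = 0.7                # heat uptake efficiency, W/(m^2 K)
warming = 0.8              # K, warming over the past century
uptake = kappa * warming   # W/m^2, to compare with the ~0.64 W/m^2 observed
```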
For the particular GCM CM2.1 discussed in post #3 (which has one of the smaller ratios of TCR to equilibrium response), using the numbers in that post, we can back out the values of β and γ needed to explain the model's TCR and equilibrium responses. This would seem to require γ ≈ 1.3 W/(m² K), much larger than this GCM's heat uptake efficiency, which is close to 0.7 W/(m² K). I'll return to this discrepancy in the next post.
A few thoughts about heat uptake:
- Thinking linearly, when the system is perturbed, the unperturbed flow transports the perturbed temperatures, and the perturbed flow transports the unperturbed temperatures. Only if the former is dominant should we expect the uptake to be proportional to the temperature perturbation. We might conceivably be able to think of the change in circulation as determined by the temperature response (if the changes in other things that affect the circulation, like salinities and wind stresses, can themselves be thought of as determined by the temperature field), but the circulation takes time to adjust, and these time scales would destroy the simplicity of the intermediate regime if the circulation responses are dominant.
- Even if circulation changes are not dominant, the coupling to the deep oceans is strongest in the North Atlantic and the Southern Ocean, so the temperature anomalies in those regions presumably have more to do with the uptake than the global mean. Only if the forced warming is separable in space and time, T(x, t) ≈ A(x)B(t), do we have any reason to expect the uptake to scale with the global mean temperature. A changing mix of greenhouse gas and aerosol forcing is the easiest way to break this separability.
- Do we have any simple theories explaining the rough magnitude of γ (or κ) to supplement GCM simulations?
[The views expressed on this blog are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.]