Vertical diffusion of heat has often been used as a starting point for thinking about ocean heat uptake associated with forced climate change. I have chosen instead to use simple box models for this purpose in these posts, because they are easier to manipulate but also because I don’t feel that the simplest diffusive models bring you much closer to the underlying ocean dynamics. But diffusion does provide a simple way of capturing the qualitative idea that deeper layers of the ocean, and larger heat capacities, become involved more or less continuously as the time scales increase. So let’s take a look at the simplest possible diffusive model for the global mean temperature response. This model is typically embellished with a surface box, representing the ocean mixed layer, as well as by some attempt at capturing advective rather than diffusive transport (e.g. *Hoffert et al 1980*).

Our equation for the ocean interior is the linear diffusion equation with depth $z$ positive downwards and constant thermal diffusivity,

$c\rho\, \partial T/\partial t = \partial \left( c\rho\kappa\, \partial T/\partial z \right)/\partial z$

or

$\partial T/\partial t = \kappa\, \partial^2 T/\partial z^2$

where $c$ is the heat capacity of water per unit mass, $\rho$ the density of water, and $\kappa$ is a kinematic diffusivity with units of $m^2/s$. The boundary condition at the surface is

$-c\rho\kappa\, \partial T/\partial z = \mathcal{F}(t) - \beta T$ at $z = 0$

where $\mathcal{F}$, the radiative forcing, is a prescribed function of time, and $\beta$ is the strength of the radiative restoring. I am assuming that the ocean is infinitely deep. Fits of this simplest diffusive model to CMIP5 GCM output for idealized forcing scenarios are discussed by *Caldeira and Myhrvold 2013*.

If we write the boundary condition in terms of the kinematic diffusivity, $\kappa$, the radiative restoring appears in the combination

$v \equiv \beta/(c\rho)$

which has units of velocity. Plugging in $\rho =$ 10^{3} kg/m^{3} and $c$ = 4.22 x 10^{3} J/(kg K) and, for example, $\beta$ = 2.0 W/(m^{2} K), we get 4.74 x 10^{-7} m/s for this velocity, or about 15 m/year. One way of thinking about this velocity scale is to pick a time scale and compute how deep $v$ takes you in this amount of time, which tells you the depth of the layer whose heat capacity gives you a radiative relaxation time for that layer comparable to the time scale that you chose (I think I got that right). Its magnitude gives you some feeling for why the oceans are effectively very deep in many climate change contexts.
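As a quick check of this arithmetic, here is a minimal Python calculation of the velocity scale, assuming the relevant combination is $\beta/(c\rho)$:

```python
# Velocity scale beta/(c*rho) from the numbers quoted above.
rho = 1.0e3   # density of water, kg/m^3
c = 4.22e3    # heat capacity of water per unit mass, J/(kg K)
beta = 2.0    # radiative restoring strength, W/(m^2 K)

v = beta / (c * rho)   # m/s
print(v)               # ~4.7e-7 m/s
print(v * 3.15e7)      # ~15 m/year
```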

Together with the kinematic diffusivity we can now define a depth scale $D$ and a time scale $\tau$:

$D \equiv \kappa/v = c\rho\kappa/\beta; \qquad \tau \equiv \kappa/v^2 = D^2/\kappa$

Climate sensitivity in this simple model is inversely proportional to $\beta$, which is assumed to be independent of time here for simplicity. So the depth scale is directly proportional to climate sensitivity, while the time scale is quadratic in the climate sensitivity. This strong dependence of the characteristic time scale on climate sensitivity, with the response of high sensitivity models much slower than that of low sensitivity models, is a much commented on feature of this diffusive model. A typical value of the kinematic diffusivity obtained from fits to GCMs is around 5 x 10^{-5} m^{2}/s, giving a time scale of about 7 years for $\beta$ = 2 W/(m^{2} K) and about 28 years for $\beta$ = 1 W/(m^{2} K).
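A sketch of the corresponding depth and time scales, assuming $D = \kappa/v$ and $\tau = \kappa/v^2$ with $v = \beta/(c\rho)$:

```python
# Depth and time scales implied by the kinematic diffusivity and
# the velocity scale v = beta/(c*rho).
rho, c = 1.0e3, 4.22e3   # water density (kg/m^3), heat capacity (J/(kg K))
kappa = 5.0e-5           # kinematic diffusivity, m^2/s
year = 3.15e7            # seconds per year

def scales(beta):
    v = beta / (c * rho)          # velocity scale, m/s
    D = kappa / v                 # depth scale, m
    tau = kappa / v**2 / year     # time scale, years
    return D, tau

print(scales(2.0))   # roughly 100 m and 7 years
print(scales(1.0))   # depth doubles, time quadruples: ~210 m, ~28 years
```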

Defining non-dimensional depth $\hat{z} \equiv z/D$ and time $\hat{t} \equiv t/\tau$, and measuring temperature in units of $\Delta\mathcal{F}/\beta$, we get for the response to a step increase in forcing, $\mathcal{F}(t) = \Delta\mathcal{F}\,\mathcal{H}(t)$:

$\partial T/\partial \hat{t} = \partial^2 T/\partial \hat{z}^2$

with the initial condition $T = 0$ and the boundary condition at $\hat{z} = 0$

$-\partial T/\partial \hat{z} = 1 - T$.

Once you non-dimensionalize in this way there are no parameters in the problem at all and you only need to solve the equation once. As in the previous post, if $\mathcal{S}(t)$ is the surface response to this step in forcing, the response to a spike in forcing is then $\mathcal{G}(t) \equiv d\mathcal{S}/dt$ and the response to an arbitrary time-dependence in $\mathcal{F}$ is

$T(t) = \int_0^t \mathcal{G}(t - t')\, \mathcal{F}(t')\, dt'\, /\, \Delta\mathcal{F}$
The step-response function $\mathcal{S}(\hat{t})$ is shown below. (I generated this pretty quickly so don’t use it for anything important without checking.) Also shown in the figure is a rough approximation to $\mathcal{S}$ that gets better for large times,

$\mathcal{S}(\hat{t}) \approx 1 - (\pi \hat{t})^{-1/2}$

Since the heat uptake by the diffusion is $\mathcal{H} = \Delta\mathcal{F} - \beta T$, the heat uptake efficiency (heat uptake per unit temperature) in this approximation is roughly $\beta (\pi \hat{t})^{-1/2}$ — or, returning to the dimensional form, $c\rho\sqrt{\kappa/(\pi t)}$ — decreasing like $t^{-1/2}$ with increasing time. You can use this approximation for $\mathcal{S}$, the response to a step function in forcing, to estimate the response for arbitrary forcing evolution.
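For reference, the step response of this non-dimensional problem has a standard closed form, $\mathcal{S}(\hat{t}) = 1 - e^{\hat{t}}\,\mathrm{erfc}(\sqrt{\hat{t}})$ (a textbook result for a semi-infinite diffusive medium with a linearized flux boundary condition; treat this as my assumption rather than something derived in the post). A quick comparison with the large-time approximation:

```python
import numpy as np
from scipy.special import erfcx  # erfcx(x) = exp(x**2)*erfc(x), avoids overflow

def S_exact(t):
    """Non-dimensional surface step response: 1 - exp(t)*erfc(sqrt(t))."""
    return 1.0 - erfcx(np.sqrt(t))

def S_approx(t):
    """Large-time approximation: 1 - 1/sqrt(pi*t)."""
    return 1.0 - 1.0 / np.sqrt(np.pi * t)

for t in [0.5, 5.0, 50.0]:
    print(t, S_exact(t), S_approx(t))
```

The two agree to better than 1% of the equilibrium response by $\hat{t} \approx 50$.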

Assuming that the forcing is linearly increasing in time, we can compute the fraction of the equilibrium response that is realized at $t = 70$ years, as a function of $\tau$ — which we can equate to the ratio of the TCR to the equilibrium response. (Increasing CO_{2} at 1% per year until doubling, which takes 70 years, is the standard way of defining TCR, and since the radiative forcing is logarithmic in CO2, this implies a linearly increasing forcing. In the context of the linear models being discussed here, the magnitude of the linear trend in forcing is irrelevant; it is only the time scale of 70 years that is important, as well as the linear shape.) For example, $\tau = 20$ years corresponds to TCR/TEQ of about 60%. I have also shown a simple fit to this ratio as a function of $\tau$, which is impressively accurate.
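By linearity, the response at the end of a 70-year ramp, as a fraction of equilibrium, is the time average of the step response over the ramp. Using the closed-form step response $1 - e^{\hat{t}}\mathrm{erfc}(\sqrt{\hat{t}})$ (my assumption for the solution of this problem), the ratio can be computed directly:

```python
import numpy as np
from scipy.special import erfcx
from scipy.integrate import quad

def S(t):
    """Non-dimensional step response, 1 - exp(t)*erfc(sqrt(t))."""
    return 1.0 - erfcx(np.sqrt(t))

def tcr_over_teq(tau, t_ramp=70.0):
    """Fraction of equilibrium realized at the end of a linear ramp:
    by linearity, the time average of the step response over the ramp."""
    x = t_ramp / tau
    integral, _ = quad(S, 0.0, x)
    return integral / x

for tau in [5.0, 20.0, 80.0]:
    print(tau, tcr_over_teq(tau))   # ~0.76, ~0.60, ~0.43
```

A $\tau$ of 20 years gives a ratio of about 0.60, consistent with the 60% figure quoted above.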

Here’s a plot of results for a scenario in which the forcing increases linearly for 70 years and then stabilizes, for values of $\tau$ = 5, 20, and 80 years, normalized so that they all have the same temperature at year 70, i.e. the same TCR. It is interesting how tightly the different curves cluster during the growth stage when normalized in this way, separating as one would expect only after the forcing has stabilized.

Given the TCR for a particular choice of parameters in this diffusive model, we can also compute the response to the forcing due to well-mixed greenhouse gases (WMGGs) over the past 100 or so years. Does the warming due to the WMGG forcing, which increases monotonically but is far from linear, scale accurately with the TCR? The answer is yes. To see this I have divided the response to WMGGs by the TCR for different values of $\tau$, and plotted the results below. (I have used the *GISS WMGG forcing*. For this purpose it is just the shape as a function of time that is relevant, not the amplitude.)

These are identical for most practical purposes, despite the large differences in the diffusive time scale controlling the degree of disequilibrium. I haven’t plotted the case with $\tau = 20$ years since that line would have to squeeze between the red and green lines and would be invisible. The claim supported by this plot is that we can confidently use the TCR to predict a model’s response over the 20th century to WMGG forcing. The concept of TCR is sometimes thought of as rather academic, since there is no close analog of linearly increasing forcing for 70 years in reality. But in fact TCR provides us with precisely what we want when we try to attribute observed warming to increases in WMGGs. This identification is robust to large changes in heat uptake efficiency. Analyses of GCMs give the same result — see post #3. A statement about the likely range of TCR is equivalent to a statement about the likely size of the forced response to the well-mixed greenhouse gases, or to CO_{2} in isolation. This is the main reason that TCR is such a useful quantity to focus on.

**[The views expressed on this blog are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.]**

**Left: The response to instantaneous quadrupling of CO_{2} in three GFDL models, from Winton et al 2013a.**

This is a continuation of post #49 on constraining the transient climate response (TCR) using the cooling resulting from a volcanic eruption, specifically Pinatubo. In order to make this connection, you need some kind of model that relates the volcanic response to the longer time scale response to an increase in CO_{2}. Our global climate models provide the logical framework for studying this connection, and using simple energy balance or linear response models to emulate the GCM behavior helps us understand what the models are saying. The previous post focused on one particular model, GFDL’s CM2.1. The figure on the right (from Merlis et al 2014 once again) compares the response to Pinatubo in CM2.1 with that in CM3, another of our models. The CM2.1 curve is an average over an ensemble of 20 realizations with different initial conditions; the CM3 curve is an average over 10 realizations. These two models have essentially the same ocean components, but their atmospheric components differ in numerous ways. Most importantly for the present discussion, the different treatments of sub-grid moist convection result in CM3 being more sensitive to CO2 increase, whether measured by the TCR or the equilibrium response. One sees this difference in sensitivity in the left panel, showing the response to instantaneous quadrupling in three models, one of which is CM3. One of the others, ESM2M, is very similar to CM2.1 (it also has the option of simulating an interactive carbon cycle, driven by emissions rather than specified concentrations of CO_{2}, so it is referred to as an Earth System Model). ESM2G has the identical atmospheric component as ESM2M but a different ocean model. As discussed in Winton et al 2013a, the different ocean models have little effect on this particular metric. The analogous simulation with CM2.1 would be very close to the green and blue curves in the left panel.
Evidently the temperature responses to Pinatubo are not providing any clear indication that CM3 is the more sensitive model.

What is clear from the left panel is that the large difference between CM3 and the CM2-based models begins to build up between 10 and 50 years after the increase in CO_{2}. The two models are close to each other during the initial (<10 year) fast response. Presumably this is why the fast response to the volcanic forcing is similar in the two models. (There is also a slower component to the volcanic response, but as discussed in #49 this is too small to see in the presence of the model’s noisy temperatures; it is clearly seen in ocean heat content or sea level — *Stenchikov et al 2009*.) Comparing the time integral of the temperature response over times less than 10 years or so with the integrated volcanic forcing provides a modest underestimate of the TCR in CM2.1, as discussed in #49; in CM3 this underestimate is much more substantial.

Winton et al 2013a provide a three time-scale fit to CM3’s response to instantaneous quadrupling of CO_{2} (their Table 5). Dividing by 2 to convert to doubling of CO_{2}, the result is

$T(t) = \sum_{k=1}^{3} T_k \left( 1 - e^{-t/\tau_k} \right)$

with $T_k$ = [1.5, 1.3, 1.8] K and $\tau_k$ = [3.3, 58, 1242] years, with $\mathcal{F}_{2\times}$ = 3.5 W/m^{2}. The response to a $\delta$-function spike in forcing is obtained by differentiating, $\mathcal{G}(t) = dT/dt$,

$\mathcal{G}(t) = \sum_{k=1}^{3} \frac{T_k}{\tau_k}\, e^{-t/\tau_k}$
For an arbitrary time evolution of the forcing $\mathcal{F}(t)$, you can then write the response as a sum over contributions from the forcing at each earlier time $t'$,

$T(t) = \frac{1}{\mathcal{F}_{2\times}} \int_0^t \mathcal{G}(t - t')\, \mathcal{F}(t')\, dt'$

where the forcing is assumed to vanish for $t < 0$. If you plug in a linearly increasing forcing reaching $\mathcal{F}_{2\times}$ at year 70, you get a transient response of about 2.0K. You can get a feeling for how it might be difficult to infer TCR directly from the surface temperature response to a volcanic eruption by playing around with this expression.
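As a sanity check on the 2.0K number, here is a minimal emulator using the three-time-scale fit quoted above, averaging the step response over a 70-year linear ramp (equivalent, by linearity, to the convolution):

```python
import numpy as np
from scipy.integrate import quad

# Three-time-scale fit to CM3 (amplitudes for CO2 doubling), from the text.
T_k = np.array([1.5, 1.3, 1.8])        # K
tau_k = np.array([3.3, 58.0, 1242.0])  # years

def step_response(t):
    """Response (K) to a step forcing of F_2x."""
    return float(np.sum(T_k * (1.0 - np.exp(-t / tau_k))))

# TCR: response at the end of a linear ramp reaching F_2x at year 70,
# which by linearity is the time average of the step response over the ramp.
integral, _ = quad(step_response, 0.0, 70.0)
tcr = integral / 70.0
print(tcr)   # ~2.0 K
```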

The difference in the shapes of the response functions in CM2.1 and CM3 is important, putting aside the difference in sensitivity. A much larger fraction of the response by year 100 is realized in the first 10 years in CM2.1 than in CM3. These different shapes have implications for attribution and near-term projection of the forced response. The plateau-ish character of CM2.1’s response is likely related to the behavior of the model’s Atlantic Meridional Overturning Circulation (AMOC). AMOC weakens in response to increasing CO2 in almost all models but then typically recovers slowly as the system equilibrates. Weaker AMOC results in a colder North Atlantic and a colder global mean (warming in the Southern Hemisphere is invariably weaker than the cooling in the north). Consistently, AMOC strengthens in the CM2.1 Pinatubo simulations (Stenchikov et al 2009). *Winton et al 2013b* examine a version of ESM2M in which the ocean currents are fixed and compare the response to CO2 (1%/year) in this model with the standard model in which currents, including AMOC, are free to change. The model with fixed currents warms more rapidly on these intermediate time scales. In this picture the plateau is not due to the absence of oceanic adjustment on multi-decadal time scales but due to a cancellation between the effects of the AMOC weakening and a gradual warming and reduction of heat uptake efficiency that would occur with fixed AMOC, as one might expect from something like a diffusive model of heat uptake.

The curious point is that ESM2M and CM3 share the same ocean model. The two models show similar reductions in AMOC in response to a warming perturbation. It is the atmospheres that are different between these two models. One hypothesis is that the different atmospheres respond differently to similar changes in AMOC, due perhaps to different cloud feedbacks, resulting in different shapes to their response functions on multi-decadal time scales. The importance of cloud feedbacks for the response to full suppression of AMOC (generated by adding a lot of freshwater to the North Atlantic) is analyzed in CM2.1 in *Zhang et al 2010*, although the focus there is on the changes in tropical rainfall rather than global mean temperature. If this picture is correct, it is interesting that modeling uncertainty in cloud feedback can result in uncertainty in the time evolution of the global mean efficiency of heat uptake.

Once one moves beyond the two-time-scale fit to three or more time scales, a simple emulator with discrete time scales begins to lose its appeal, as compared to models that start from a picture of vertical diffusion or some other continuous process. The latter would potentially have fewer disposable parameters. In fact, some colleagues have questioned why I haven’t started from a diffusive picture in these posts. It doesn’t bring us much closer to the underlying physics (i.e. the vertical diffusivity that one ends up using to emulate GCMs on this 100-year time scale has no simple physical interpretation), but if one has to rely on AMOC-related adjustments to suppress the response on multi-decadal time scales in order to justify a model with well-separated fast and slow responses, then a diffusive starting point might be more parsimonious. I’ll try to return to this topic in a future post.

I realize that a theoretical discussion like this, in which I haven’t confronted the model with data on the response to Pinatubo, strikes some readers as unbalanced. But I think we need a theoretical framework to think about how the volcanic responses and TCR are related, from which vantage point we can then think about the implications for TCR estimates of any discrepancies between modeled and observed volcanic responses.

(Thanks to several colleagues, especially Mike Winton, Tim Merlis, and Rong Zhang, for discussions on this topic.)

**Some results on the response of a GCM (GFDL’s CM2.1) to instantaneous doubling or halving of CO_{2} (left) and to an estimate of the stratospheric aerosols from the Pinatubo eruption. From Merlis et al 2014.**

The following is based on the recent paper by *Merlis et al 2014* on inferring the Transient Climate Response (TCR) from the cooling due to the aerosols from a volcanic eruption. The TCR is the warming in global mean surface temperature in a model at the time of doubling of CO_{2}, in a simulation in which CO_{2} is increased at 1% per year.

Another simulation that has become standard for models is to just double (or quadruple) the CO_{2} instantaneously and watch the system equilibrate. This gives you more information about the various time scales involved in the equilibration. For CM2.1, the upper left panel shows the evolution over the first 20 years (this is an ensemble mean over 10 realizations with different initial conditions taken from different times in a control run). It also shows a fit with a function of the form

$T(t) = T_F \left( 1 - e^{-t/\tau_F} \right)$

with $\mathcal{F}$ the radiative forcing due to doubling of CO_{2} (3.5 W/m^{2} here — all radiative forcings are computed by holding SSTs fixed, perturbing the system, letting the atmosphere+land equilibrate, and examining the imbalance at the top-of-atmosphere), $T_F \approx$ 1.45K and $\tau_F \approx$ 2.8 years. In previous posts, I have discussed how one can interpret this short-time scale response in terms of a simple box model for the surface layers of the ocean,

$c\, \frac{dT}{dt} = \mathcal{F} - \beta T - \mathcal{H}$

where $\mathcal{F}$ is the radiative forcing, $\beta$ is the strength of the radiative restoring, taking into account all of the radiative feedbacks, $\mathcal{H}$ is the heat uptake into the deeper layers of the ocean, and $c$ is the heat capacity of the surface layer. If we set $\mathcal{H} = \gamma T$ with $\gamma$ the *heat uptake efficiency*, the heat uptake acts as an additional negative feedback. We then have $T_F = \mathcal{F}/(\beta + \gamma)$.
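Given the fitted fast response, one can invert $T_F = \mathcal{F}/(\beta+\gamma)$ for the combined restoring strength, and, as a side check using the fitted time scale, for the effective heat capacity of the surface layer. A back-of-envelope sketch:

```python
# Invert the fitted fast response for the combined restoring strength
# beta + gamma, and estimate the implied effective mixed-layer depth
# from the fitted time scale tau_F = c/(beta + gamma).
F = 3.5       # W/m^2, forcing for CO2 doubling
T_F = 1.45    # K, fitted fast-response amplitude
tau_F = 2.8   # years, fitted fast time scale

restoring = F / T_F                   # beta + gamma, W/(m^2 K)
c_eff = tau_F * 3.15e7 * restoring    # effective heat capacity, J/(m^2 K)
depth = c_eff / (4.22e3 * 1.0e3)      # divide by c*rho of water -> meters

print(restoring)   # ~2.4 W/(m^2 K)
print(depth)       # ~50 m equivalent depth of water
```

The implied depth of a few tens of meters is in the right range for an ocean mixed layer, which is reassuring for the box interpretation.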

The figure also shows the mean of an ensemble of runs with an instantaneous reduction in CO_{2} by a factor of 2. There is a small difference in the ensemble mean, marginally significant at the 10% level, between these warming and cooling switch-on simulations, with $T_F \approx$ 1.35K fitting the cooling case. (We checked that the radiative forcing is almost exactly logarithmic in the model, so we can use the same magnitude of forcing for doubling and halving.) This qualitative result might be expected from the picture that cooling at the surface reduces the gravitational stability of the water column, increasing the heat uptake efficiency. But the difference between warming and cooling is small on this time scale, which is nice from the perspective of using a cooling perturbation like a volcanic eruption to infer the transient response to warming.

The model is still taking up heat at about 1 W/m^{2} after 20 years (lower left panel). This model’s equilibrium climate sensitivity is about 3.4K, but it approaches this equilibrium very slowly. Fitting the evolution over longer time scales with a sum of two exponentials,

$T(t) = T_F \left( 1 - e^{-t/\tau_F} \right) + T_S \left( 1 - e^{-t/\tau_S} \right)$

we get something like $\tau_S \approx$ 400-500 years for the slow time scale, with $T_S$ making up the remainder of the equilibrium response. This large gap in time scales is clearly the best possible situation if you want to infer a response on the time scale of 50-100 years from the response to a much shorter time-scale forcing. See *Geoffroy et al 2013* to place the shape of this response function in the context of that found in other GCMs.

As a first approximation to a volcano, we can set $\mathcal{F}$ to be a spike, a $\delta$-function, $\mathcal{F}(t) = \mathcal{I}\,\delta(t)$, with the result

$T(t) = \frac{\mathcal{I}}{\mathcal{F}} \left( \frac{T_F}{\tau_F}\, e^{-t/\tau_F} + \frac{T_S}{\tau_S}\, e^{-t/\tau_S} \right)$

where $\mathcal{I}$ is the integral of the volcanic radiative forcing, $\int \mathcal{F}\, dt$, which we might call the volcanic radiative impulse. The simple but important point to notice here is the appearance of the factors $1/\tau_F$ and $1/\tau_S$. If the magnitudes of the temperature responses to a step increase in forcing on the fast and slow time scales are comparable, the response to impulsive forcing will be much smaller on the long time scale, by the ratio $\tau_F/\tau_S$. The long weak tail of the volcanic response has been discussed by *Wigley et al 2005* and *Delworth et al 2005*.

To the extent that one is able to focus on the fast response in isolation, we can average over time, returning to the single box interpretation if you like,

$\int_0^{\tau^*} T\, dt \approx \frac{\mathcal{I}}{\beta + \gamma} = \frac{\mathcal{I}}{\mathcal{F}}\, T_F$

where $\tau^*$ is a time long compared to the fast decay and short compared to the slow decay. The setup for computing TCR involves linearly increasing radiative forcing (since this forcing is logarithmic in CO_{2}) for 70 years. For a two-box model mimicking CM2.1, this results in a TCR very accurately given by $T_F = \mathcal{F}/(\beta + \gamma)$. So the estimate of TCR provided by the volcano is

$TCR \approx \frac{\mathcal{F}}{\mathcal{I}} \int_0^{\tau^*} T\, dt$

This integral method does not involve an estimate of the time scale of the fast response.
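To see the integral method at work, here is a sketch applying it to a synthetic two-time-scale response to a forcing spike. The fast parameters are those quoted for CM2.1; the slow amplitude and time scale are placeholder values of my own choosing, consistent with the equilibrium sensitivity and the 400-500 year range mentioned above:

```python
import numpy as np
from scipy.integrate import quad

# Integral method on a synthetic two-exponential impulse response.
T_f, tau_f = 1.45, 2.8     # fast response: K, years (from the fits above)
T_s, tau_s = 1.95, 450.0   # slow response: placeholder values
F = 3.5                    # W/m^2, forcing for CO2 doubling
I = -6.5                   # W yr/m^2, volcanic radiative impulse

def G(t):
    """Response (K) to a unit forcing spike of 1 W yr/m^2."""
    return (T_f / tau_f * np.exp(-t / tau_f)
            + T_s / tau_s * np.exp(-t / tau_s)) / F

# Integrate the cooling over the first ~20 years, then rescale by F/I.
cooling_integral, _ = quad(lambda t: I * G(t), 0.0, 20.0)   # K yr, negative
tcr_est = (F / I) * cooling_integral
print(tcr_est)   # close to T_f: the impulse integral mostly sees the fast response
```

On this idealized response the estimate lands near the fast amplitude; with a real, noisy single realization the integral is much harder to evaluate, which is part of the story below.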

Merlis et al piggyback on the ensemble of simulations of the response of CM2.1 to the Pinatubo eruption described in *Stenchikov et al 2009*, which used an ensemble of 20 runs, 10 initialized during El Nino events in the model’s control simulation and 10 initialized during La Nina events. (Pinatubo occurred during an El Nino, so it is of interest whether this modifies the forced response to the volcano. The response is nominally a bit larger in the La Nina ensemble mean, but larger ensembles would be needed to quantify this difference.) The Pinatubo forcing in this model is shown as the light blue line in panel d above. It’s not a $\delta$-function, but its duration is less than the model’s dominant fast response time. The volcanic radiative impulse $\mathcal{I}$ is about -6.5 W yr/m^{2}.

The temperature response is shown in panel c. The ensemble mean integrated response up to year 20 is 2.35 K-yr. This gives an estimate of TCR of 1.3K. This is close to the model’s TCR of 1.5K but a little low. The figure also shows the fit that you get with this one-timescale model, constraining it to fit the integral of the response and using the time scale from the instantaneous doubling simulation. You can also fit the volcanic response varying the two parameters $T_F$ and $\tau_F$ simultaneously. This two-parameter fit gives an estimate of TCR that is smaller still — about 1.1K. The single time scale model is not a perfect fit to the GCM response. One can understand the sensitivity to the fitting procedure qualitatively if you assume, for example, that the fast response in the GCM actually occurs on two time scales — let’s say 1 year and 4 years — conserving the sum of these two responses and playing with their ratio.

Since the model’s TCR is 1.5K, the underestimate at 1.1K is not trivial. It is the sum of little things in this model: the slight difference between warming and cooling perturbations, a small effect in this model of time scales longer than the dominant fast response time, plus a distortion due to the fitting procedure when the fast response itself is not well fit with a single time scale.

*We do not need to estimate the separate effects of radiative feedbacks and heat uptake, $\beta$ and $\gamma$, to estimate TCR in this way, and there is no need to refer to equilibrium climate sensitivity.*

We have used 20 realizations of the response to Pinatubo to get these results. How is this relevant to the problem of determining TCR from a single realization (and without a no-volcano control)? Merlis et al describe what you get if you take one realization of CM2.1, remove the average of the 10 years before the eruption, and also remove an estimate of the ENSO contribution based on the relationship between global mean temperature and NINO3.4 SSTs in this GCM. You have to do something like this to get any meaningful results from a single realization, and we don’t claim that this is optimal. We also find it difficult to use the integral method with single realizations, so we use the two-parameter fitting procedure that results in 1.1K when applied to the ensemble mean response. We get the following:

The whiskers span the entire range of values obtained from the 20 realizations, the box represents the middle half (25-75%), and the red line the median (with the red and blue dots corresponding to the La Nina and El Nino ensembles). The median is close to the “correct” value of 1.1K for the two-parameter fit. The blue dotted line indicates the value inferred from fitting to the fast response in the 0.5X instantaneous cooling simulation. My suspicion is that this spread is too large, partly because the interannual variability of global mean surface temperature in this model is too big, mostly due to too large an ENSO amplitude — and partly because you can probably do better than this with a better algorithm, possibly multivariate, for isolating the volcanic signal in a single realization. Even with this much uncertainty, this would be useful as one piece of information among others, if coupled to some theoretical guidance for the bias involved.

There’s the rub, I think — because this underestimate could be much larger in reality than in CM2.1 if intermediate time scales play a larger role than they do in this particular model. I’ll return to this issue in Part II.

This is a continuation of the discussion in the previous post regarding the increase in water vapor near the surface, and within the boundary layer more generally, as the ocean surface warms. Models very robustly maintain more or less constant relative humidity in these lower tropospheric layers over the oceans as they warm, basically due to the constraint imposed by the energy balance of the troposphere on the strength of the hydrological cycle, and the tight coupling between the latter and the low-level relative humidity over the oceans. Do we have observational evidence for this behavior? The answer is a definitive yes, as indicated by the plot above of microwave measurements of total column water vapor compared to model simulations of the same quantity. These are monthly means of water vapor integrated in the vertical. The observations are the *RSS Total Precipitable Water Product V7*. The model is the 50km-resolution version of GFDL’s HiRAM discussed in previous posts (on hurricane simulations, MSU trends, and land surface temperature trends), which uses observed SSTs and sea ice extent as its lower boundary conditions.

This is not asking a lot of the model. One gets about the same quality of fit for tropical averages by using the SSTs directly, instead of the model’s water vapor, and multiplying by 7%/C, roughly the fixed relative humidity value — see Fig. 7 on this *RSS page* for example.

The observational estimate of the total water vapor over this tropical ocean domain is 41.75 kg/m^{2}. In the model, averaging over the 3 realizations, I get 40.51, with negligible standard deviation across the ensemble. Putting aside any issues with absolute calibration of the satellite sensors, and more mundane things such as consistent land-sea masks, let’s accept that the model is biased low by 3%. This is of the same magnitude as the trends over the satellite era! Should we trust the result at all? Perhaps the radiative cooling is a bit too strong in the model’s troposphere, causing near-surface humidity to drop a bit so as to supply the required evaporation. Or maybe the boundary layer is 3% too shallow, reducing the vertical integral. Or, even more likely, the bias, if real, is due to a combination of these and other small model deficiencies. Does this bias matter?

I have a hard time seeing how it does. Admittedly, we are typically interested in the absolute increases in water vapor, not the fractional increase. But if the fractional changes in water vapor with surface warming are ok, as the figure suggests, this bias implies that the change in water vapor would also be underestimated by 3% — that is, by 3% of 7%, or 6.8% rather than 7% per degree. I think we can agree that there are bigger fish to fry.

Absolute biases of this kind are easy to find in models, and are often the target of critics. But you have to have a cogent argument why a particular bias matters. There are some absolute biases that do matter, of course, but comparison of the size of the bias to the size of the response in question may not be the most relevant criterion for whether a bias matters or not.

Water vapor feedback is only weakly related to the vertically integrated water vapor discussed here. There are some frequencies, particularly those associated with what is known as the water vapor continuum, where the infrared emission from the lower troposphere reaches the tropopause, and a lot of the feedback due to solar absorption by water vapor comes from optically thin lines as well. But these don’t add up to a major fraction of the full water vapor feedback that you get from a model that maintains more or less constant relative humidity in the upper as well as the lower troposphere. Do these results indirectly increase our confidence that the physics of the tropical upper tropospheric water vapor feedback is well simulated in our models?

I don’t think so. At a given level in the tropics above the boundary layer, water vapor concentrations are more closely related to what is going on above, not below, this level. Air is brought to saturation by upward motion in the tropics, but most of this upward motion takes place in a small fraction of the total tropical area — even if this area changed it would not change area-averaged relative humidity that much — and in these regions clouds tend to prevent the infrared emission by water vapor from reaching the tropopause anyway. What matters more is the humidity in the non-convecting, drier areas. The relative humidity in those areas is determined, to first approximation, by the previous history of the air parcels arriving at the level in question. What was the temperature (and the pressure) at the higher levels at which these parcels were last saturated? This saturation event sets the mixing ratio of water vapor to dry air that is then conserved as the parcel descends. This has little to do with the humidity in the lower troposphere that dominates the vertical integral.
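The last-saturation picture can be illustrated with a toy calculation: a parcel saturated aloft conserves its vapor as it descends, so its relative humidity on arrival is roughly the ratio of the saturation vapor pressures at the last-saturation and arrival temperatures. The temperatures below are arbitrary illustrative choices, and pressure effects on the mixing ratio are ignored in this sketch:

```python
import math

def es(T):
    """Saturation vapor pressure (Pa): Clausius-Clapeyron with constant
    latent heat, a common textbook approximation."""
    L, Rv, T0, es0 = 2.5e6, 461.5, 273.15, 611.0
    return es0 * math.exp(L / Rv * (1.0 / T0 - 1.0 / T))

T_last_sat = 230.0   # K, where the parcel was last saturated (illustrative)
T_arrival = 250.0    # K, the warmer level it subsides to (illustrative)

rh = es(T_last_sat) / es(T_arrival)
print(rh)   # ~0.15: dry, as in the non-convecting tropics
```

A modest 20K gap between the last-saturation and arrival temperatures already produces relative humidities in the 10-20% range typical of subsiding regions.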

To finish up, here is a plot analogous to the one at the top, but for the Northern extratropics, from 20N to 45N, once again over oceans only. I have used a 5-month (1-2-3-2-1) smoother here to knock down the noise for aesthetic reasons.

(The model bias is about the same here as in the tropics, about 2-3% low.) The agreement is pretty good here as well. This region is interesting because it is where midlatitude eddies transport water polewards systematically, with poleward-moving moist air and equatorward-moving dry air. The increase in vapor alone, without any change in the eddies themselves, causes an increased poleward transport of water. The divergence of this transport must be balanced by evaporation (E) minus precipitation (P) — so the eddies, by sucking water out of the subtropics, are reducing P − E there, simultaneously increasing P − E on the poleward side of the storm tracks — qualitatively consistent with the salinity trends discussed in post #14. (This figure also provides a slightly different perspective on the hiatus, without as strong an influence of ENSO variability.)

**The change in near surface relative humidity averaged over CMIP5 models over the 21st century in the RCP4.5 scenario. Dec-Jan-Feb is on the left and June-July-Aug on the right. From Laine et al, 2014**.

We expect the amount of water vapor in the atmosphere to increase as the atmosphere warms. The physical constraints that lead us to expect this are particularly strong in the atmospheric boundary layer over the oceans. The relative humidity (RH — the ratio of the actual vapor pressure to the saturation value) at the standard height of 2 meters is roughly 0.80 over the oceans. At typical temperatures near the surface, the fractional increase in the saturation vapor pressure per degree C warming is about 7%. So RH would decrease by about the same fraction, amounting to roughly 0.06 per degree C of warming if the water vapor near the surface did not increase at all. Why isn’t it possible for RH to decrease by this seemingly modest amount?

The figure shows what CMIP5 models predict will happen to RH near the surface by the end of the present century in the RCP4.5 scenario. In this scenario, which requires major mitigation efforts by mid-century, these models warm the tropics by about 1.6C on average, so fixed vapor concentrations would result in a decrease in RH of about 0.10. (I am avoiding expressing RH as a percentage to avoid having to talk about percentages of percentages.) The figure, from *Laine et al 2014*, shows that the changes in RH over the oceans are far smaller than this.

To understand the first-order picture, we need two pieces of information, one regarding the global energy balance of the troposphere and the other regarding how the strength of the global hydrological cycle is related to near-surface RH.

The tropospheric energy balance to first order is a balance between radiative cooling and the release of latent heat when water vapor condenses. In the global mean there is roughly 80 W/m^{2} of latent heating. The change in this number in global climate models is typically only a 1 or maybe 1.5 W/m^{2} increase per degree C warming in 1%/yr transient CO2 simulations (*Pendergrass and Hartmann 2014*), or at most 2% per degree C warming. Pendergrass and Hartmann provide a nice deconstruction of this number.

The second point to appreciate is that evaporation is controlled by the degree of sub-saturation of the air near the surface — roughly speaking by (1-RH) rather than RH itself. The air in contact with the ocean surface is saturated, and it is the gradient in the concentration of water vapor between this saturated air and the air at the reference level that drives evaporation. If the relative humidity at the reference level is 0.80, the sub-saturation, 1-RH, is 0.20, and a reduction in relative humidity from 0.8 to 0.7 (as would be consistent with fixed vapor concentration in the warming simulation pictured above) would result in a 50%(!) increase in (1-RH). A 50% increase in evaporation is obviously ruled out by energy balance requirements. So we expect small changes in RH near the surface as the climate warms.

More precisely, evaporation over the oceans can be approximated by the “bulk formula”

$E = \rho C |\mathbf{v}| \left[ q_s(T_s) - \mathrm{RH}\, q_s(T_a) \right]$

Here $q_s(T_s)$ and $q_s(T_a)$ are the saturation humidities at the ocean surface and reference level temperatures respectively, $\mathrm{RH}$ and $|\mathbf{v}|$ are the relative humidity and wind speed at this reference level, $\rho$ the atmospheric density and $C$ a non-dimensional constant. A lot of physics and a lot of empirical evidence has been stuffed into the constant $C$, guided by what is affectionately known as *Monin-Obukhov similarity theory*. (All global climate models compute surface fluxes using Monin-Obukhov scaling as the starting point.) $C$ depends on the height of the reference level, some properties of the surface (specifically surface “roughnesses”), and the gravitational stability of the atmosphere near the surface, which in turn is strongly coupled to the air-sea temperature difference.

If we ignore the air-sea temperature difference as well as changes in wind speed and $C$, then we just have $E \propto (1-\mathrm{RH})\, q_s(T_s)$. If the specific humidity does not change, then the large fractional increase in $(1-\mathrm{RH})$ results in a huge increase in evaporation, as discussed above. But it is even worse than that, because $q_s(T_s)$ will also increase by about 7%/C on top of the effect of the change in $(1-\mathrm{RH})$.

Can the other factors in the expression for evaporation compensate somehow? The changes in tropical weather would have to be profound to produce reductions in average wind speed large enough to compensate for such a large increase in $(1-\mathrm{RH})$. Fortunately, no models even hint at such profound changes. We can rewrite the expression for the evaporation as

$E = \rho C |\mathbf{v}| \left[ (1-\mathrm{RH})\, q_s(T_s) + \mathrm{RH} \left( q_s(T_s) - q_s(T_a) \right) \right]$

For the term proportional to $\mathrm{RH}$ to compensate for the large increase in $(1-\mathrm{RH})$, the air-sea temperature difference would have to change sign, since the temperature difference is small — only +1 to +2C over the tropical oceans. But this temperature difference is itself constrained by an energy balance argument, as discussed by Betts and Ridgeway. [Due to mixing of water vapor in the turbulent boundary layer, the specific humidity is relatively homogeneous with height in this layer while temperatures decrease with height, so we often reach a point at which saturation occurs within the boundary layer, the cloud base. Latent heat release comes into play only above this level; something has to balance the radiative cooling below cloud base and it is the sensible heat transfer from the surface, proportional to the air-sea temperature difference, which has to pick up the slack.] And it is also extremely implausible that the value of $C$ could cause this magnitude of an adjustment in evaporation (the easiest way of changing $C$ is to change the air-sea temperature difference a lot). Something very dramatic would have to happen in the tropical atmosphere to avoid the constraint that near surface water vapor over the ocean must increase as the surface warms to maintain nearly constant relative humidity.

As for the second order picture, the small increase in RH over the oceans, note that the $q_s(T_s)$ term would result in an increase in evaporation of 7% per degree C warming *even if the relative humidity were fixed*, and that this increase is already too large to be consistent with the energy constraint. An increase in RH of about 0.01, that is, a decrease in 1-RH of about 5%, is about the right order of magnitude to restore consistency. This seems to be part of what is going on in the CMIP5 composite at the top of the post. But now the changes are small enough that reductions in average wind speed and modest changes in the air-sea temperature difference could also play a role, as they seem to do in models. However, the models do seem to take advantage of the simplest way of throttling back the evaporation — a small increase in RH.

This near surface relative humidity is not just relevant for the lowest few meters of the atmosphere, since these near surface values are coherent with the humidity of the entire planetary boundary layer — the lowest 1-2 kms of the troposphere — because of the strong turbulent mixing throughout this layer. While the boundary layer is not where most water vapor feedback originates, it does contain a large fraction of the mass of water vapor. The increase in total mass of water vapor with warming has lots of consequences — for example, for the increase in the amplitude of the pattern of evaporation minus precipitation discussed in Posts #13 and #14.

**Evolution in time of fluxes at the top of the atmosphere (TOA) in several GCMs running the standard scenario in which CO _{2} is increased at the rate of 1%/yr until the concentration has quadrupled**.

A classic way of comparing one climate model to another is to first generate a stable control climate with fixed CO_{2} and then perturb this control by increasing CO_{2} at the rate of 1%/yr. It takes 70 years to double and 140 years to quadruple the concentration. I am focusing here on how the global mean longwave flux at the TOA changes in time.
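The doubling and quadrupling times follow directly from compounding at 1%/yr; a trivial check:

```python
import math

# With CO2 increasing 1%/yr, concentration after t years is (1.01)**t.
# Time to reach a factor r: t = ln(r) / ln(1.01).
def years_to_factor(r, rate=0.01):
    return math.log(r) / math.log(1 + rate)

t_double = years_to_factor(2)   # ~70 years
t_quad = years_to_factor(4)     # ~140 years
print(f"doubling: {t_double:.1f} yr, quadrupling: {t_quad:.1f} yr")
```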

For this figure I’ve picked off a few model simulations from the CMIP5 archive (just one realization per model), computed annual means and then used a 7 yr triangular smoother to knock down ENSO noise, and plotted the global mean short and long wave TOA fluxes as perturbations from the start of this smoothed series. The longwave ($L$) and shortwave ($S$) perturbations are both considered positive when directed into the system, so $N = L + S$ is the net heating. The only external forcing agent that is changing here is CO_{2}, which (in isolation from the effects of the changing climate on the radiative fluxes) acts to heat the system by decreasing the outgoing longwave radiation (increasing $L$). *But in most of these models, $L$ is actually decreasing over time, cooling the atmosphere-ocean system.* It is an increase in the net incoming shortwave ($S$) that appears to be heating the system – in all but one case. This qualitative result is common in GCMs. I have encountered several confusing discussions of this behavior recently, motivating this post. Also, the ESM2M model that is an outlier here is very closely related to the CM2.1 model that I have looked at quite a bit, so I am interested in its outlier status.

Since the radiative forcing due to CO_{2} is logarithmic over this range, the radiative forcing increases linearly in time. Global mean surface temperatures also increase roughly linearly in time, as does the heat uptake $N$, as seen in the following:

Another reason that I am interested in this comparison is that ESM2M has a low transient climate response (the warming at the time of doubling) that I like for a variety of reasons.

When thinking about this sort of thing, I tend to start with the energy balance of the ocean mixed layer, the surface layer of the ocean that is well-mixed by turbulence generated at the surface. Globally averaged we can think of this layer as being something like 50m deep, providing a heat capacity that is more than an order of magnitude larger than that of the atmosphere. Ignoring the latter, we can think of $N = L + S$ as heating this layer directly. This surface layer is cooled by transfer of heat $H$ to deeper layers of the ocean:

$c \, dT/dt = L + S - H$

On the time scales of interest here the heat capacity $c$ of this layer is itself negligible and we can ignore the time-derivative in this equation, so that $N \equiv L + S = H$. For small perturbations, I’ll assume that $L = F - \beta_L T$, where $F$ is the CO_{2} forcing and $\beta_L$ is the sensitivity of the longwave flux to temperature. We could also write $S = F_S - \beta_S T$ in general, but in this case of CO_{2} forcing only, $F_S$ is small and we can think of $S = -\beta_S T$ as pure feedback.

Importantly for this discussion, I am also going to write $H = \gamma T$. $\gamma$ is referred to as the efficiency of the heat uptake — the heat uptake per unit global warming. This allows us to define a transient climate response very easily — solving for $T$:

$T = \dfrac{F}{\beta_L + \beta_S + \gamma}$
In previous posts, I have referred to the time scales of the forcing for which this is a useful first approximation as the intermediate regime (this hasn’t caught on — maybe I should try something else) — intermediate between the faster time scales (due to volcanoes for example) for which the heat capacity of the mixed layer is important and the slower time scales over which the deeper ocean starts to equilibrate.

With these sign conventions, $\beta_L$ and $\gamma$ are positive, while $\beta_S$ is negative if shortwave feedback is positive (sorry). If all of these coefficients are constant in time over these 140 years of simulation, and given our other approximations, we expect $T$ and $N$ to both increase linearly in time, as is roughly the case in these models. (Actually, one of these curves is typically a bit concave upwards while the other is a bit concave downwards, but I think the simplest model is adequate here even if it can only fit these curves to the extent that they are linear in time.) Solving for $L$,

$L = F - \beta_L T = F \, \dfrac{\beta_S + \gamma}{\beta_L + \beta_S + \gamma}$
Whether $L$ increases or decreases in time — that is, whether the forcing wins or the response to increasing temperatures wins — depends on the sign of $\beta_S + \gamma$. If the positive shortwave feedback is larger in magnitude than the efficiency of the heat uptake, $L$ decreases as $F$ increases. To create this counterintuitive behavior the shortwave feedback does not have to compete with $\beta_L$. It need only compete with $\gamma$. Averaging over the models (leaving aside ESM2M), and looking at the values averaged over years 60-80, at the time of doubling, I get . It is closer to 0.9 in ESM2M. The corresponding mean value of $\beta_S$ is about -0.85 (and -0.3 in ESM2M). Assuming that $F$ at the time of doubling is 3.5 W/m^{2}, I get (with ESM2M roughly 2.2, so nothing special there.)
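As a sanity check on this algebra, here is a minimal Python sketch of the mixed-layer balance just described. The shortwave feedback of -0.85 and the forcing of 3.5 W/m^{2} are from the text; the longwave feedback and heat uptake efficiency are illustrative guesses on my part, chosen only so that the shortwave feedback slightly outcompetes the uptake efficiency.

```python
# Mixed-layer balance: N = L + S = gamma*T, with L = F - beta_L*T and
# S = -beta_S*T. beta_S = -0.85 and F = 3.5 are from the text;
# beta_L = 2.0 and gamma = 0.7 are illustrative assumptions.
def toa_fluxes(F, beta_L, beta_S, gamma):
    T = F / (beta_L + beta_S + gamma)   # from F - (beta_L + beta_S)*T = gamma*T
    Lw = F - beta_L * T                 # longwave perturbation, positive inward
    Sw = -beta_S * T                    # shortwave perturbation
    return T, Lw, Sw

T, Lw, Sw = toa_fluxes(F=3.5, beta_L=2.0, beta_S=-0.85, gamma=0.7)
# Here beta_S + gamma = -0.15 < 0, so the longwave perturbation is
# negative and the heating comes from the shortwave, despite the
# forcing being entirely longwave.
print(f"T = {T:.2f} K, L = {Lw:.2f} W/m^2, S = {Sw:.2f} W/m^2")
```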

Most of the spread among models in the shortwave feedback is undoubtedly due to clouds, but there is a non-cloud related background positive shortwave feedback – partly due to surface (snow and ice) albedo feedback and partly due to positive shortwave water vapor feedback. The latter does not get mentioned much because it is often lumped together with the larger infrared feedback, but it accounts for something like 15% of the total water vapor feedback (water vapor absorbs solar radiation, reducing the amount of solar radiation reaching the surface, so more vapor means less reflection from the surface and less loss of energy to space through this reflection). The surface albedo and water vapor shortwave feedbacks are probably enough in themselves to compete with $\gamma$. In ESM2M negative shortwave cloud feedbacks bring the magnitude of $\beta_S$ down and $\gamma$ is relatively large, resulting in the intuitive response – the outgoing longwave decreasing with time with increasing CO_{2}.

(The following paragraph corrected on June 1, 2014.) Although it is not directly relevant to the simulations described above, it is interesting to consider the special case in which there is some positive solar forcing added to the positive longwave forcing. For simplicity, let’s just assume that there is no shortwave feedback, so $\beta_S = 0$ (we still have longwave feedback of course). The temperatures will increase if the total forcing $F = F_L + F_S$ is positive, and this warming must be due to positive $N$ in our simple model (assuming once again that we are in the intermediate regime). But is it $L$ or $S$ that looks like it is causing the warming? A manipulation similar to that above shows that $L = (\gamma F_L - \beta_L F_S)/(\beta_L + \gamma)$. So if the shortwave forcing is larger than $\gamma/\beta_L$ times the longwave forcing — this ratio is something like 25% in the main group of models that we looked at above — the system is being heated by the shortwave rather than the longwave flux even though the shortwave forcing might be much smaller than the longwave forcing.
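A quick numerical sketch of this special case, with no shortwave feedback and the forcing split into longwave and shortwave parts. The coefficient values are illustrative assumptions on my part, chosen so that the critical ratio of uptake efficiency to longwave restoring is 0.25, the rough value quoted above.

```python
# Special case beta_S = 0: T = (F_L + F_S)/(beta_L + gamma) and
# L = F_L - beta_L*T = (gamma*F_L - beta_L*F_S)/(beta_L + gamma).
# beta_L and gamma are illustrative assumptions with gamma/beta_L = 0.25.
def longwave_perturbation(F_L, F_S, beta_L, gamma):
    T = (F_L + F_S) / (beta_L + gamma)
    return F_L - beta_L * T

beta_L, gamma = 2.0, 0.5
# F_S/F_L = 1/3 > 0.25: longwave perturbation is negative
L_big_solar = longwave_perturbation(3.0, 1.0, beta_L, gamma)
# F_S/F_L = 1/6 < 0.25: longwave perturbation is positive
L_small_solar = longwave_perturbation(3.0, 0.5, beta_L, gamma)
print(L_big_solar, L_small_solar)
```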

I guess the moral here, if there is one, is that it is useful to have an explicit model in mind, however simple, when thinking about the Earth’s energy balance and its relationship with surface temperature.

In the late 80’s, Mark Cane, Steve Zebiak and colleagues wrote a series of papers – *Zebiak and Cane 1987* is one of the first – about a simple oscillatory atmosphere-ocean model of the tropical Pacific, with the goal of capturing the essence of ENSO evolution and providing dynamical predictions of ENSO. In

It is easy enough to understand why the Cane-Zebiak model tilts towards la Nina as it warms. On the ocean side, this is a model of the waters above the thermocline in the equatorial Pacific. Crucially, the temperature of the water upwelling into this layer from below is fixed as a boundary condition. Most of the upwelling occurs in the eastern Pacific. When the waters of the surface layer are warmed, the upwelling of water from deeper layers, assumed to be unaffected by the warming, retards the warming in the East but not the West, increasing the east-west temperature gradient across the Pacific. One can then envision the basic mechanism underlying ENSO kicking in to enhance the temperature gradient. Known as the Bjerknes feedback, a stronger east-west temperature gradient generates a precipitation distribution (more rain in the west, less in the east) that enhances the strength of the trade winds along the equator, pushing surface waters westward and enhancing the upwelling of cold waters in the east. The manner in which different negative feedbacks then develop due to slower transfers of heat between equatorial and off-equatorial waters is a main focus of ENSO theory, and these complicate matters, but presumably you can still think of la Nina conditions as being favored by upwelling waters that have not yet experienced warming.

The path taken by this water that upwells in the eastern Pacific is intricate. The major pathway is part of the shallow wind-driven overturning circulation. Subduction and last contact with the surface is primarily in the subtropics, mostly in the eastern half of the basin from where water masses can more easily drift westward and equatorward, typically reaching the western boundary first, where they proceed equatorward below the surface, eventually feeding the equatorial undercurrent which rises as it moves back eastward, mixing with surface waters in the east. An early paper describing the theory and modeling of this circulation is * McCreary and Lu, 1994*. See also the schematic in Fig. 3 of

Waters subducted further polewards than the subtropics can also move equatorward and get caught up in the equatorial undercurrent and coastal (Peruvian) upwelling. Radiocarbon in tropical corals – * Toggweiler et al 1991 *- suggests that these denser source waters come from as far away as the Southern ocean north of the circumpolar current. This would lengthen the time lag, and maybe make it more plausible that the subsurface plumbing that emerges in our ocean models might be deficient.

Models don’t typically generate a la Nina like forced response, as seen in the figure at the top. The discrepancy does not just affect the usual hiatus period, the past 15 or so years, but as shown in the figure it affects trends over the full satellite era (causing the discrepancy between models’ and satellite (MSU) estimates of tropical tropospheric warming trends among other things). One possibility of course is that internal variability is the cause of this discrepancy between the observed and the forced component of the model trends. But the question here is whether the models could be missing a la Nina-like tendency in their *forced* responses.

In *Held et al 2010*, we tried to separate the response in our CM2.1 model (in an ensemble of 20th century +A1B scenario simulations with stabilization of forcing agents after 2100) into fast and slow components with different spatial structures. We did this by returning all anthropogenic forcing agents to their pre-industrial values instantaneously at three times (2100, 2200, 2300). In response there is a fast cooling with e-folding time of just a few years, followed by a much slower “recalcitrant” cooling back to the pre-industrial climate. The slow part is computed by looking at what’s left 20 years following the instantaneous return to pre-industrial forcing, long enough for the fast part to have decayed away. The slow component can be thought of as the effect of the warming of the sub-surface waters on surface temperatures. The upshot is that the temperature response at any time is decomposed into two components, $T(x,t) = T_f(t)\,f(x) + T_s(t)\,s(x)$. The patterns $f$ and $s$ are normalized to integrate to unity over the sphere, so that the global mean temperature is $T_f(t) + T_s(t)$. Post #8 discusses the magnitudes of these two components. Their spatial patterns $f(x)$ and $s(x)$ are shown below. (We didn’t try to estimate the slow part at 2100 because its amplitude is too small to get a good handle on its spatial structure — we were only using a single realization.)

The fast part resembles la Nina, with larger warming in the western than in the eastern tropical Pacific. The slow part provides a complementary El-Nino like pattern, more or less as one would expect from the dynamic retardation argument of Clement et al (I am avoiding the word “thermostat” because this mechanism is not maintaining a particular temperature.) You also get the sense of the different tropical responses imprinting themselves on the North Pacific as expected from the known responses to ENSO. I am not sure why this distinction between the equatorial Pacific structure of the fast and slow responses shows up clearly here and not so clearly within the 20th century part of these simulations, which should be dominated by the fast response. (The oversimplification of there being only two effective time scales is probably to blame — ie, some of the equatorial response in the slow component may not be as slow as the global mean recalcitrant component discussed in post #8.) I am pretty confused about the whole range of issues related to forced responses and free multi-decadal variability in the tropical Pacific. But maybe there is something to the simple idea that when warming starts kicking in rapidly enough, the eastern equatorial Pacific holds it back temporarily.


For the forced component, there is a 3-way balance between the forcing $F$, the heat uptake $N$, and the radiative restoring proportional to the temperature response, $\beta T$, with strength $\beta$ inversely proportional to the climate sensitivity: $F = \beta T + N$. Here and in what follows, $F$ is the change in forcing over the interval considered, so $T_{EQ} \equiv F/\beta$ is the usual equilibrium sensitivity scaled by the ratio of $F$ to the forcing from doubling CO_{2}. When I refer to TCR in the following, it is also normalized in the same way. So TCR is simply the forced response in global mean temperature $T_F$. $N$ is positive into the ocean. For starters, I’ll ignore the question of the *efficacy* of oceanic heat uptake.

The key assumption is that the relation between global mean temperature and the energy balance of the earth is the same for both the forced and internal components. So an internally generated perturbation in the global mean temperature is accompanied by an increase in the net outward flux at the TOA of $\beta T_I$.

Set $T = T_F + T_I$ and similarly for the heat uptake $N = N_F + N_I$. We can write the heat uptake in the forced response in terms of the equilibrium sensitivity and the TCR:

$N_F = \beta (T_{EQ} - T_F) = \beta (T_{EQ} - \mathrm{TCR})$
So, adding the forced and internal components for the heat uptake:

$N = N_F + N_I = \beta (T_{EQ} - \mathrm{TCR}) - \beta T_I = \beta (T_{EQ} - T)$
It is the full $T$ that enters here. The heat flux is *into* the ocean if the equilibrium response is larger than the observed temperature perturbation.

This expression is transparent to the relative magnitude of the forced and free parts of $T$. For this purpose, as in post #16, we can rewrite $T_F = A\,T$ so that $A$ is the fraction of the temperature anomaly that is forced. And we get

$N = \beta \left( T_{EQ} - \mathrm{TCR}/A \right)$
One can write this in different ways (the way I chose in #16 being particularly obscure). We can just leave it in this form, from which we see that if the heat flux is into the ocean we must have (given all of our assumptions) $A > \mathrm{TCR}/T_{EQ}$.
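A minimal numerical sketch of this constraint, writing the heat uptake as $N = \beta(T_{EQ} - \mathrm{TCR}/A)$ with $A$ the forced fraction of the temperature anomaly. All coefficient values are illustrative assumptions, not estimates from any model.

```python
# Heat uptake given the forced fraction A of the observed anomaly.
# beta, T_eq (equilibrium response) and tcr are illustrative numbers.
def heat_uptake(beta, T_eq, tcr, A):
    T = tcr / A              # observed anomaly implied by forced fraction A
    return beta * (T_eq - T)

beta, T_eq, tcr = 1.2, 3.0, 1.8
threshold = tcr / T_eq       # uptake into the ocean requires A > 0.6 here
n_into_ocean = heat_uptake(beta, T_eq, tcr, A=0.8)  # positive
n_out = heat_uptake(beta, T_eq, tcr, A=0.5)         # negative: inconsistent
print(threshold, n_into_ocean, n_out)
```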

This all seems reasonable, but now let’s go back and re-examine our key assumption that an internal variation in temperature perturbs the TOA budget by an amount $\beta T_I$, with the same value of $\beta$ that occurs in the forced response. Why should it be the same constant of proportionality, especially if the internal variability has a different spatial structure than the forced response? So how do we relate the strength of this “restoring force” for internal variability to its strength for the forced response? Before getting back to this, we need to reintroduce the notion of efficacy of heat uptake.

For the forced response, when we try to emulate the behavior of GCMs, we find that we need to replace the expression $F = \beta T + N$ with

$F = \beta T + \varepsilon N$
The efficacy of heat uptake is defined as $\varepsilon$ and is almost always larger than one when emulating GCMs – see Post #5 and *Winton et al (2010)*. This is because the response to heat uptake is typically more polar amplified than the equilibrated response to the forcing, and perturbations at higher latitudes are restored less strongly by radiation to space than those at lower latitudes. So you get more bang for your buck by forcing at high latitudes. (Different parts of the forcing can have different efficacies as well, which is the sense in which this term was first used in this context, but I’ll ignore that here.) For a recent example of papers on this, see

Just as for the forced heat uptake, it is natural to expect the radiative restoring of low frequency internal variability to be weaker than that relevant for the equilibrium forced response. Both the forced heat uptake and the low frequency variability involve coupling to deeper ocean layers and this coupling is strongest in subpolar regions. So could it be the case that the restoring for low frequency variability resembles $\beta/\varepsilon$? It might be interesting to see where the assumption leads. Setting $\beta_I = \beta/\varepsilon$, we have $N_I = -(\beta/\varepsilon) T_I$ and

$N = (\beta/\varepsilon)(T_{EQ} - T)$
So we still have the result that positive heat uptake implies an equilibrium response over the time period in question (ie a temperature change over this period computed by assuming no heat uptake) that is larger than the actual temperature change. Expressing this in terms of the transient response we once again get the result that to be consistent with positive heat uptake we need $A > \mathrm{TCR}/T_{EQ}$. When efficacy is not equal to one, it is the assumption $\beta_I = \beta/\varepsilon$ that saves these intuitive and simple expressions.

Does $\beta_I = \beta/\varepsilon$ hold in GCMs? How does the strength of the radiative restoring resulting from low frequency internal variability relate to that in the model’s response to heat uptake in the forced response? The smaller $\beta_I$, the weaker the constraint on $A$. There is no reason to expect close agreement; there are undoubtedly different parts of the internal variability — focused on Northern compared to Southern subpolar latitudes, for example — that could be damped differently. But it would be interesting if $\beta_I$ was at least correlated with $\beta/\varepsilon$ across models. I am not aware of any papers that have looked at this.

**Animation of near-surface wind speeds in rotating radiative-convection equilibrium, following Zhou et al, 2014. **

I have discussed models of non-rotating radiative-convective equilibrium (RCE) in previous posts. Given an atmospheric model one idealizes it by throwing out the spherical geometry, land-ocean configuration and rotation, creating a doubly-periodic planar geometry re-entrant in both x and y, while also removing any horizontal inhomogeneities in the forcing and boundary conditions. In the simplest case, surface temperatures are specified and the surface is assumed to be water-saturated. The result is an interesting idealized explicitly fluid dynamical system for studying how the climate — especially that of the tropical atmosphere — is maintained by a balance between destabilization through radiative fluxes and stabilization through turbulent moist convection. There is a lot that we don’t understand about this setup, which still contains all of the complexity of latent heat release and cloud formation. But even though we don’t understand the non-rotating case very well, it is interesting to re-introduce rotation while maintaining horizontal homogeneity. Adding rotation has a profound influence on the results — the model atmosphere fills up with tropical cyclones! Some colleagues suggest referring to this system as

You can include rotation while retaining horizontal homogeneity by adding a Coriolis force of fixed strength, independent of latitude. In fact, one typically ignores the vertical component of the Coriolis force and simply adds terms to the horizontal equations of motion that, in isolation, would cause the horizontal winds to rotate at a fixed rate $f$, the *Coriolis parameter*. (This geometry is referred to as the *f-plane* in textbooks and articles on geophysical fluid dynamics.)
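Here is a minimal sketch of what these added terms do in isolation: the wind vector rotates at rate $f$ while its speed is conserved. The latitude of 20N used to set $f$ below is an arbitrary illustrative choice, not a value from the paper.

```python
import math

# f-plane Coriolis terms in isolation: du/dt = f*v, dv/dt = -f*u.
# The exact solution rotates the wind vector clockwise (for f > 0)
# at rate f, conserving its speed.
OMEGA = 7.292e-5                        # Earth's rotation rate, 1/s
f = 2 * OMEGA * math.sin(math.radians(20.0))  # illustrative latitude

def rotate(u, v, dt):
    """Exact update of (u, v) under the Coriolis terms over time dt."""
    c, s = math.cos(f * dt), math.sin(f * dt)
    return c * u + s * v, -s * u + c * v

u, v = 10.0, 0.0                        # initial wind, m/s
period = 2 * math.pi / f                # inertial period, ~35 hours here
u1, v1 = rotate(u, v, period / 4)       # quarter turn: (10, 0) -> (0, -10)
print(f"inertial period: {period/3600:.1f} h, quarter turn: ({u1:.2f}, {v1:.2f})")
```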

Wenyu Zhou has been studying Rotating RCE in collaboration with several of us at GFDL. The first paper on this work is *Zhou et al, 2014*. The animation above is the near-surface wind speed from one of the simulations analyzed in this paper. Red corresponds roughly to hurricane strength winds. The Coriolis parameter is $f = 2\Omega \sin(\theta)$, evaluated at a fixed latitude $\theta$, where $\Omega$ is the magnitude of the angular velocity of the Earth. Surface temperatures are fixed at 300K. A month of simulation is shown, after several months of equilibration starting from an initial condition with no TCs present.

In studies of RCE, we often push the horizontal grid down to 1 or 2 km to help in explicitly simulating at least the largest convective plumes that extend to the tropopause. In this paper we use a much coarser resolution, 25km, to the consternation of some reviewers — we simply take a global atmospheric model with 25 km resolution and place it in this idealized f-plane geometry. The model includes a sub-grid closure scheme for moist convection. The number of grid points in the horizontal is 800×800, producing a 20,000km square domain. This is not meant as a model of a little patch of the atmosphere! We are, of course, interested in how a model with a 1 or 2 km grid would behave, but that would be computationally expensive for us even in a smaller domain barely large enough to contain a few storms — we want to have enough storms in the domain that we can study things like how the average spacing between storms varies with rotation rate or SST (this distance increases with decreasing f and with increasing SST.) But we are also especially interested in how our global model, which simulates the geographical and seasonal distribution of TC genesis rather well (post #2), behaves in this idealized geometry. I was involved in an earlier paper taking the same approach of placing a global model in this idealized f-plane geometry, *Held and Zhao 2008*, but with even coarser resolution. That paper did not create much of a stir. This approach gets more interesting (and the review process becomes a bit less painful) when the global model that we start with has TC statistics that look realistic. Increasing computer power should make exploration of this kind of rotating moist-convective turbulence more common.

In a recent paper *Khairoutdinov and Emanuel 2013* have generated simulations with 3 km resolution that produce multiple storms that qualitatively resemble the result shown above. They make the computation tractable by increasing $f$ by an order of magnitude compared to Earth-like values, resulting in storms small enough that you can get into this multiple storm regime much more easily.

You can get a sense from the video that the model does produce storms with a relatively well-defined radius of maximum winds. Wenyu describes how this internal storm scale, despite our low resolution, changes systematically with model parameters. This is obviously one place where there is likely to be important sensitivity to resolution. But we also find that the size of the domain, if too small, can modify the sensitivity of this radius of maximum winds to other parameters by not allowing the storm to settle into its preferred horizontal structure. A nice comparison of theories for mature TC structure with numerical simulations, including Rotating RCE, can be found in the recent thesis of * Daniel Chavas 2013*.

The most interesting qualitative result to me is simply that in this homogeneous system the natural equilibrated state is an atmosphere filled with TCs. In reality, and in this model when run over realistic boundary conditions on a rotating sphere, TCs are very far from being so all-pervasive. This seems partly to be due to the very long lifetimes of the vortices in this model. Nearly all of the storms in the video survive over the month shown. There is very little merging of vortices. And there is, by construction, no movement of vortices over land, cutting off their energy supply, or poleward drift into midlatitudes followed by being torn apart by jets and extratropical storms. (This poleward drift is due in large part to the increase in strength of the Coriolis parameter with latitude on a rotating sphere, a gradient not present in our f-plane setup.) The storms in rotating RCE just pile up, to a first approximation, until the occasional decay/merger is balanced by the occasional new storm managing to squeeze in and grab enough of the energy source at the surface.

In addition, if one can get far enough away from the influence of other storms the homogeneous environment here is always conducive to the genesis of new storms. There are no strong vertical shears of the large scale horizontal winds, or large-scale dry-air intrusions, and no SSTs that are too cold to allow convection up to the tropopause. All of these suppression mechanisms result from large-scale horizontal inhomogeneities.

Rotating RCE produces a distinctive kind of turbulence, dominated by vortices of one sign that are strongly dissipative and dependent for their survival on continuous access to their energy source. Are there analogies to turbulent flows that arise in other contexts?

Whenever setting up an idealized model like this you have to ask if detailed study would really help us understand nature. My intuition is that Rotating RCE will turn out to be very valuable — especially if we can devise clean ways of systematically reintroducing relevant inhomogeneities using the homogeneous case as a starting point.

**Zonal (east-west) wind in the lower troposphere (850mb) in two simulations with a 50km resolution atmospheric model with zonally symmetric boundary conditions. Only of longitude within the tropics (30S-30N) is shown. The ITCZ is located at in the upper panel and in the lower panel. Simulations described in Merlis et al 2013. (White, Black) => winds from the (west, east). 6 frames/day for 100 days.**

The frequency of formation of hurricanes/typhoons has mostly been studied in the past by trying to develop “genesis indices” – empirical relations between the frequency of storm genesis and the larger scale circulation and thermodynamic structure of the atmosphere. But there is an ongoing transition, picking up steam in a number of atmospheric modeling groups around the world, to using global atmospheric models that simulate hurricanes directly to study how genesis is controlled. One goal of this work is to understand how hurricane frequency responds to the warming resulting from increasing greenhouse gases. Posts #2, 10, and 33 describe some of our recent efforts at GFDL along these lines. That work uses models in a comprehensive setting, with a seasonal cycle and realistic distribution of continents. But Tim Merlis, Ming Zhao, Andrew Ballinger and I have started looking at analogous simulations with global models in more idealized settings.

The animation above is from a model described in * Merlis et al 2013*. The model has no continents, and no seasonal or diurnal cycles, and the ocean is replaced by a stationary slab of water 20 meters thick, providing some heat capacity and a source of water vapor. The temperature of the slab ocean is predicted by the model. Other than the boundary conditions and lack of seasonal forcing, the model is identical to the one that generates the simulations described in post #2 in which sea surface temperatures are prescribed.

We start with a circulation forced symmetrically between Northern and Southern Hemispheres. Tropical rainfall is then localized in an intertropical convergence zone (ITCZ) centered on the equator. No hurricanes form in this model configuration despite the fact that the model with realistic boundary conditions generates about the right number. Then, just as in *Kang et al 2008* (see post #37), we move a given amount of heat within the ocean from high latitudes of one hemisphere to high latitudes of the other hemisphere, causing the tropical rain belt to move some distance off the equator, allowing hurricanes to form. The animations above show two cases, with the ITCZ located roughly at 3N and 8N. The two runs differ only in the prescribed cross-equatorial heat transport. The sensitivity of hurricane number to perturbations in ITCZ latitude in this model is impressive — about a 40% increase per degree latitude poleward displacement of the ITCZ when the ITCZ is at 8N.

[This increase in genesis as the ITCZ is moved off the equator is related to the magnitude of the vorticity in the larger scale environment, a parameter in all empirical genesis indices. Vorticity is the curl of the velocity field. If a fluid is in solid body rotation, with angular velocity Ω, a vector that points in the direction of the axis of rotation, the vorticity is simply 2Ω. But the atmosphere is a thin shell on the surface of a sphere, so it is primarily the radial (locally vertical) component of the vorticity of the solid body rotation that the storms care about, f = 2Ω sin(θ), where θ is latitude. f vanishes at the equator and increases linearly as one moves off the equator.]
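As a minimal numerical illustration of this latitude dependence (the specific latitudes and Earth's rotation rate here are my own choices for the sketch, not output from the model), the planetary vorticity f can be evaluated at the equator and at the two ITCZ latitudes used in these runs:

```python
import math

# Earth's rotation rate (rad/s)
OMEGA = 7.292e-5

def planetary_vorticity(lat_deg):
    """Locally vertical component of the solid-body vorticity, f = 2*Omega*sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

# f vanishes at the equator and grows roughly linearly as the ITCZ moves poleward
for lat in [0.0, 3.0, 8.0]:
    print(f"f at {lat:4.1f}N = {planetary_vorticity(lat):.2e} s^-1")
```

Moving the ITCZ from 3N to 8N roughly triples the ambient planetary vorticity available to developing storms.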

You sometimes hear the view expressed that there might be some simple, elegant theory for the average number of tropical cyclones that form per year. I guess this conviction is based on the idea that these cyclones play a fundamental role of some kind in maintaining the climate and that you need a certain number, more or less, to fill that role. (Also, the globally averaged number of tropical storms does not seem to vary much from year to year.) I don’t have much sympathy for this view, as I have never understood what this role might be. I think a more plausible hypothesis is that tropical cyclones are the tail of the dog with weak effects on the general circulation as a whole, at least in a climate at all resembling what we have now. These simulations reinforce my view on this. If boundary conditions are idealized and conditions are modified so that the climate is zonally symmetric and the ITCZ lies along the equator, no hurricanes form in the model (there are a few weak storms that spin off midlatitude fronts penetrating into the subtropics). Has the role that these storms are needed to fill somehow changed with this change in boundary conditions?

Another way of eliminating tropical storms in the model is to reduce the heat capacity of the model “ocean”, the depth of the stationary slab of water. If you take a simulation with a realistic number of hurricanes, this number decreases and eventually approaches zero as the depth of this slab ocean approaches zero. The surface that the atmosphere sees in this limit resembles a water saturated land surface — a swamp. A mature tropical cyclone is a strongly damped vortex that is continually extracting energy from the ocean. If the slab depth is too shallow, then, in response to the energy extraction, the surface cools too much to sustain deep convection. (Tropical cyclone statistics seem to converge for slab depths greater than 20m in our model.) The model’s atmospheric climate as a whole changes in only rather modest ways as this heat capacity is decreased – in the absence of a seasonal cycle.
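The role of the slab depth can be made concrete with a back-of-the-envelope estimate (a sketch with assumed numbers, not model output): a well-mixed slab of depth d losing heat at a surface flux F for a time t cools by ΔT = F t / (ρ c d), so a very shallow slab cools far too quickly to sustain the convection at the storm's core.

```python
# Rough estimate of surface cooling under a storm's energy extraction.
# The flux value (300 W/m^2) and one-day duration are illustrative assumptions.
RHO = 1.0e3      # density of water, kg/m^3
C   = 4.22e3     # specific heat of water, J/(kg K)

def slab_cooling(flux_w_m2, seconds, depth_m):
    """Temperature drop (K) of a well-mixed slab losing heat at the given flux."""
    return flux_w_m2 * seconds / (RHO * C * depth_m)

day = 86400.0
for depth in [2.0, 20.0]:
    dT = slab_cooling(300.0, day, depth)
    print(f"{depth:4.1f} m slab cools by {dT:.2f} K per day")
```

The cooling scales inversely with depth: the 2 m slab cools ten times faster than the 20 m slab under the same flux.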

Once we have moved the ITCZ off the equator, we then increase the model temperature by increasing the total solar irradiance or CO_{2}. The number of hurricanes increases – about 15% per K of tropical warming. This is interesting to us because the number of tropical cyclones or hurricanes tends to decrease (or remain roughly constant) with warming in most models — when they are configured with realistic boundary conditions — and this model is no different. In the idealized model the ITCZ moves further poleward with tropical warming, about 0.6 degrees latitude per K. If we compensate for this poleward movement by decreasing the cross-equatorial heat flux in the ocean by just the right amount, we find that the number of hurricanes does decrease, by about 10% per K of tropical warming. With fixed oceanic heat transport, the increase due to displacement of the ITCZ overcompensates for this reduction.
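Taking the quoted sensitivities at face value, these numbers are roughly consistent with each other (this is my own arithmetic check, not a calculation from the paper): the ITCZ-displacement effect combined with the fixed-ITCZ reduction recovers the net increase.

```python
# Decompose the net frequency response using the sensitivities quoted above.
itcz_shift_per_K = 0.6    # degrees latitude poleward per K of tropical warming
pct_per_degree   = 40.0   # % increase in hurricane count per degree of ITCZ displacement
pct_per_K_fixed  = -10.0  # % change per K at fixed ITCZ latitude

displacement_term = itcz_shift_per_K * pct_per_degree   # +24 %/K from the ITCZ shift
net = displacement_term + pct_per_K_fixed               # ~ +14 %/K, close to the quoted ~15 %/K
print(f"displacement term: {displacement_term:+.0f} %/K, net: {net:+.0f} %/K")
```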

In these particular idealized simulations, the response of hurricane frequency, N, to warming seems to break down into three different problems, each involving very different dynamical mechanisms: the dependence of ITCZ latitude on warming, that is, on an increase in insolation or CO_{2}; the dependence of N on warming with fixed ITCZ latitude; and the dependence of N on ITCZ latitude at fixed tropical mean temperature.
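Schematically (in my own notation, with N the hurricane frequency, φ the ITCZ latitude, and T the tropical mean temperature), this decomposition is just a chain rule:

```latex
\frac{dN}{dT} \;=\;
\underbrace{\left(\frac{\partial N}{\partial T}\right)_{\varphi}}_{\text{fixed ITCZ latitude}}
\;+\;
\underbrace{\left(\frac{\partial N}{\partial \varphi}\right)_{T}}_{\text{ITCZ sensitivity}}
\,\underbrace{\frac{d\varphi}{dT}}_{\text{ITCZ shift}}
```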

There are mechanisms relevant for storm development in more realistic climate configurations that are muted or absent in this aqua-planet setup. But even in this idealized aqua-planet model, work underway by Andrew, Tim, and Ming indicates that there are other characteristics of the tropical circulation besides the ITCZ latitude that help control hurricane frequency. So this is still work in progress.