Posted on June 7th, 2015 in Isaac Held's Blog
Given the problems that our global climate models have in simulating the global mean energy balance of the Earth, some readers may have a hard time understanding why many of us in climate science devote so much attention to these models. A big part of the explanation is the quality of the large-scale atmospheric circulation that they provide. To my mind this is without doubt one of the great triumphs of computer simulation in all of science.
The figure above is meant to give you a feeling for this quality. It shows the zonal (eastward) component of the wind as a function of latitude and pressure, averaged in time and around latitude circles. This is an atmosphere/land model running over observed ocean temperatures with roughly 50km horizontal resolution. The model results at the top (dec-jan-feb on the left and june-july-aug on the right) are compared with the observational estimate below them. The observations are provided by a reanalysis product (more on reanalysis below). The contour interval is 5m/s; westerlies (eastward flow) are red, easterlies are blue. Features of interest are the location of the transition from westerlies to easterlies at the surface in the subtropics, and the relative positions of the subtropical jet at 200mb, the lower tropospheric westerlies, and the polar stratospheric jet in winter (the latter is barely visible near the upper boundary of the plot when using pressure as a vertical coordinate).
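A cross section like this comes down to a simple reduction over the time and longitude dimensions of the wind field. A minimal sketch with NumPy, using a random stand-in array since the actual data source is not part of this post:

```python
import numpy as np

# Hypothetical stand-in for seasonal zonal-wind data u(time, level, lat, lon);
# real input would come from model output or a reanalysis archive.
rng = np.random.default_rng(0)
u = rng.standard_normal((90, 17, 73, 144))  # 90 daily fields, 17 pressure levels

# Average over time (axis 0) and around latitude circles (axis 3),
# leaving the latitude-pressure cross section shown in the figure.
u_zonal_mean = u.mean(axis=(0, 3))
print(u_zonal_mean.shape)  # (17, 73)
```

The array shapes here are illustrative; the point is only that "averaged in time and around latitude circles" is a mean over those two axes.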
The next plot is also of the zonal component of the wind averaged over the same two seasons, but now on the 200mb surface, close to the subtropical jet maximum near the tropopause. Features of interest here include the orientation of the Pacific and Atlantic jets in the northern winter (models often have difficulty capturing the degree of NE-SW tilt of the Atlantic jet), the secondary westerly maxima over the northern tropical oceans in northern summer (time-mean signatures of the tropical upper tropospheric troughs, or TUTs), and the split in the jet over New Zealand in southern winter. The contour interval here is 10m/s.
This circulation cannot be maintained without realistic simulation of the heat and momentum fluxes due to the dominant eastward propagating midlatitude storms familiar from weather maps. These fluxes depend not just on the magnitude of these eddies but also on the covariability of the eastward component of the wind (u), the northward component of the wind (v), and the temperature (T). The following plot, focusing on the winter only, shows maps of the northward eddy heat flux, the covariance between v and T, and the eddy northward flux of eastward momentum, the covariance between u and v. The latter, in particular, turns out to be fundamental to the maintenance of the surface winds and can be challenging to capture quantitatively. The plots show these fluxes only for eddies with periods between roughly 2 and 7 days (fluxes due to lower frequencies are also significant but have different dynamics and structures). Each flux is shown at a pressure level close to where it is largest: the eddy heat flux is largest in the lower troposphere, while the momentum fluxes peak near the tropopause. The storm tracks, marked by the maxima in the down-gradient poleward eddy heat fluxes in the lower troposphere, are accompanied by a dipolar structure of the momentum fluxes in the upper troposphere, with meridional convergence of eastward momentum into the latitude of the storm track. The eddies responsible for these fluxes have scales of 1,000 km and greater. This is what we mean by the term large-scale flow in this context.
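These band-limited covariances are straightforward to compute from gridded time series. A minimal sketch for a single grid point, assuming 6-hourly data and using a Butterworth bandpass filter (the function name and filter details here are my own illustrative choices, not necessarily what was used for the figures):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_covariance(x, y, dt_days=0.25, long_period=7.0, short_period=2.0):
    """Covariance of two time series after a 2-7 day bandpass filter.

    With x = v and y = T this approximates the bandpass eddy heat flux v'T';
    with x = u and y = v, the eddy momentum flux u'v'. A 3rd-order
    Butterworth filter, run forward and backward for zero phase shift,
    is one reasonable choice among several.
    """
    nyquist = 0.5 / dt_days                  # cycles per day
    low = (1.0 / long_period) / nyquist      # normalized band edges
    high = (1.0 / short_period) / nyquist
    b, a = butter(3, [low, high], btype="band")
    x_bp = filtfilt(b, a, np.asarray(x) - np.mean(x))
    y_bp = filtfilt(b, a, np.asarray(y) - np.mean(y))
    return float(np.mean(x_bp * y_bp))
```

Applied at every point on a latitude-longitude grid near 850mb for v and T, or near 200-300mb for u and v, this yields maps of the kind shown in the plot.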
I am using reanalysis as the observational standard for these fields, an idea that takes some getting used to. Weather prediction centers need initial conditions with which to start their forecasts. They get these by combining information from past forecasts with new data from balloons, satellites, and aircraft. These are referred to as analyses. If you took a record of all of these analyses as your best guess for the state of the atmosphere over time, it would suffer from two inhomogeneities: one due to changes in data sources and another due to changes in the underlying model into which the data is being assimilated. Reanalyses remove the second of these inhomogeneities by assimilating the entire historical data stream into a fixed (modern) version of the model. They still retain the inhomogeneity due to changing data sources over time. Where data is plentiful, the model provides a dynamically consistent, multivariate space-time interpolation procedure. Where data is sparse, one is obviously relying on the model more.
The multivariate nature of the interpolation is critical. As an important example, horizontal gradients in temperature are very closely tied to vertical gradients in the horizontal wind field (for large-scale flow outside of the deep tropics). It makes little sense to look for an optimal estimate of the wind field at some time and place without taking advantage of temperature data. The underlying model and the data assimilation procedure handle this and less obvious constraints naturally. Importantly, the model can propagate information from data-rich regions into data-poor regions if this propagation of information is fast compared to the time scale on which errors grow. For climatological circulation fields such as the ones that I have shown here, reanalyses provide our best estimates of the state of the atmosphere. For the northern hemisphere outside of the tropics these estimates are very good — I suspect that they provide the most accurate description of any turbulent flow in all of science. For the tropics and for the southern hemisphere, the differences between reanalyses can be large enough that estimating model biases requires more care.
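The constraint tying temperature gradients to wind shear is thermal wind balance. For the zonal component of the geostrophic wind in pressure coordinates,

```latex
f \, \frac{\partial u_g}{\partial p} \;=\; \frac{R}{p} \, \frac{\partial T}{\partial y}
```

where f is the Coriolis parameter and R the gas constant. A poleward decrease of temperature (∂T/∂y < 0 in the northern hemisphere) makes the westerlies strengthen with height, which is why temperature observations constrain the wind field, and vice versa, outside of the deep tropics where f is not small.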
I am claiming that comparison to reanalyses is a good measure of the quality of our simulations for these kinds of fields. (You need to distinguish estimates of the mean climate described here from estimates of trends, which are much harder.) If you accept this, then I think you will agree that the quality seen in the free-running model (with prescribed SSTs) is impressive (which is not to say that the remaining biases are insignificant, especially for regional climates). This quality is worth keeping in mind when reading a claim that atmospheric models as currently formulated are missing some fundamentally important mechanism or that the numerical algorithms being used are woefully inadequate.
I would also claim that these turbulent midlatitude eddies are in fact easier to simulate than the turbulence in a pipe or wind tunnel in a laboratory. This claim is based on the fact that the atmospheric flow on these scales is quasi-two-dimensional. The flow is not actually 2D — the horizontal flow in the upper troposphere is very different from the flow in the lower troposphere, for example — but unlike familiar 3D turbulence, which cascades energy very rapidly from large to small scales, the atmosphere shares a key feature of 2D turbulence: energy at large horizontal scales stays at large scales, the natural movement in fact being toward even larger scales. In the atmosphere, energy is removed from these large scales where the flow rubs against the surface, transferring energy to the 3D turbulence in the planetary boundary layer and then to the scales at which viscous dissipation acts. Because there is a large separation in scale between the large-scale eddies and the small eddies in the boundary layer, this loss of energy can be modeled reasonably well with guidance from detailed observations of boundary layer turbulence. Numerical weather prediction and climate simulation are difficult enough as it is; without this key distinction between the way energy moves between scales in 2D and in 3D, they would be far more difficult, if not totally impractical.
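The reason energy moves upscale in 2D can be stated compactly (this is the standard Fjørtoft argument, added here for readers who want the missing step). A 2D flow conserves both energy and enstrophy (mean-square vorticity), which in terms of the energy spectrum E(k) are

```latex
E = \int_0^\infty E(k)\, dk , \qquad Z = \int_0^\infty k^2 \, E(k)\, dk .
```

Because enstrophy weights the same spectrum by k², any transfer that pushed the energy toward small scales (large k) would increase the enstrophy without bound; conserving both quantities therefore forces the bulk of the energy toward larger scales, while enstrophy cascades to smaller scales. The atmosphere is only quasi-2D, so this argument applies in the approximate sense described above.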
I have been focusing on some things that our atmospheric models are good at. It is often a challenge to decide the relative importance, for any aspect of climate change, of the parts of the model that are fully convincing and of those that are works in progress, such as the global cloud field or specific regional details (you might or might not care that a global model produces a climate in central England more appropriate for Scotland). You can err on the side of inappropriately dismissing model results; this is often a consequence of being unaware of what these models are, of what they do simulate with considerable skill, and of our understanding of where the weak points are. But you can also err on the side of uncritical acceptance of model results; this can result from being seduced by the beauty of the simulations, and possibly by a prior research path built on utilizing model strengths while avoiding their weaknesses (speaking of myself here). The animation in post #2 is produced by precisely the model that I have used for all of the figures in this post. I find this animation inspiring. That we can generate such beautiful and accurate simulations from a few basic equations is still startling to me. I have to keep reminding myself that there are important limitations to what these models can do.
[Note added June 10 in response to some e-mails. For those who have looked at the CMIP archives and seen bigger biases than described here, keep in mind that I am describing an AMIP simulation — with prescribed SSTs. The extratropical circulation will deteriorate depending on the pattern and amplitude of the SST biases that develop in a coupled model. Also this model has roughly 50km horizontal resolution, substantially finer than most of the atmospheric models in the CMIP archives. These biases often improve gradually with increasing resolution. And there are other fields that are more sensitive to the sub-grid scale closures for moist convection, especially in the tropics. I’ll try to discuss some of these eventually.]
[The views expressed on this blog are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.]