Posted on June 13th, 2011 in Isaac Held's Blog
From Hall and Qu, 2006. Each number corresponds to a model in the CMIP3 archive. The vertical axis is a measure of the strength of the surface albedo feedback due to snow cover change over the 21st century (surface albedo change divided by the change in surface temperature over land in April). The horizontal axis is a measure of the surface albedo feedback over land in the seasonal cycle (the April-to-May change in albedo divided by the change in temperature). The focus is on springtime since this is the period in which albedo feedback tends to be strongest.
There are a lot of uncertainties in how to simulate climate, so, if you ask me, it is self-evident that we need a variety of climate models. The ensembles of models that we consider are often models that different groups around the world have come up with as their best shots at climate simulation. Or they might be “perturbed physics” ensembles, in which one starts with a given model and perturbs a set of parameters. The latter provide a much more systematic approach to parametric uncertainty, while the former give us an impression of structural uncertainty (i.e., these models often don’t even agree on what the parameters are). The spread of model responses is useful as input into attempts at characterizing uncertainty, but I want to focus here not on characterizing uncertainty, but on reducing it.
Suppose that we want to predict some aspect P of the forced climate response to increasing CO2, and that we believe a model’s ability to simulate an observable O (let’s think of O as just a single real number) is relevant to evaluating the value of this model for predicting P. For the i’th model in our ensemble, plot the prediction Pi on the y-axis and the simulation Oi on the x-axis. (I am thinking here of averaging over multiple realizations if needed to isolate the forced response.) The figure shows a case in which there is a rather good linear relationship between the Pi‘s and Oi‘s. So in this case the simulation of O discriminates between model futures.
Now we bring in the actual value of O — the vertical shaded region in the figure. Because the simulated value of O discriminates between different predicted values of P, the observations potentially provide a way of decreasing our uncertainty in P, possibly rather dramatically. The relationship between O and P need not be linear or univariate, but there has to be some relationship if we are to learn anything constructive about P from the observation of O. And the observed value of O need not lie in the range of model values — if the relationship is simple enough, extrapolation might even be warranted. This will all seem obvious if you are used to working with simple models with a few uncertain parameters. But when working with a global climate model, the problem is often finding the appropriate O for the P of interest.
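The procedure described above can be sketched numerically. In this minimal illustration, the ensemble values of O and P, the observed value of O, and the linear form of the O–P relationship are all assumptions made for the sake of the example, not data from any actual model archive. We fit a line across the ensemble, evaluate it at the observed O, and compare the residual scatter about the line with the raw ensemble spread in P:

```python
import numpy as np

# Hypothetical ensemble: each model i provides a simulated observable O_i
# (e.g. a seasonal-cycle feedback strength) and a prediction P_i (e.g. the
# corresponding 21st-century feedback). All values here are synthetic.
O = np.array([0.45, 0.60, 0.72, 0.55, 0.80, 0.65, 0.50, 0.90, 0.70, 0.58])
P = np.array([0.50, 0.68, 0.78, 0.57, 0.86, 0.70, 0.52, 0.95, 0.76, 0.63])

# Fit the across-ensemble linear relationship P ~ a*O + b.
a, b = np.polyfit(O, P, 1)

# Raw uncertainty: the spread of the ensemble predictions themselves.
raw_spread = P.std(ddof=1)

# Constrained estimate: evaluate the fit at the observed value of O;
# the scatter of the models about the fit is the remaining uncertainty.
O_obs = 0.62                      # hypothetical observed value
P_constrained = a * O_obs + b
residual_spread = (P - (a * O + b)).std(ddof=1)

print(f"ensemble mean P   = {P.mean():.3f} +/- {raw_spread:.3f}")
print(f"constrained P     = {P_constrained:.3f} +/- {residual_spread:.3f}")
```

If the linear relationship across the ensemble is tight, the residual spread is much smaller than the raw ensemble spread, which is exactly the sense in which the observation of O reduces uncertainty in P. If there is no relationship, the residual spread is as large as the raw spread and the observation tells us nothing about P.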
Consider the effects of global warming on Sahel rainfall. This problem grabbed my attention (and that of several of my colleagues) because our CM2.1 climate model predicts very dramatic drying of the Sahel in the late 21st century (see here and here). But this result is an outlier among the world’s climate models. In fact, some models increase rainfall in the Sahel in the future. Using different criteria, one can come to very different conclusions about CM2.1’s relative fitness for this purpose. For example, if one just looks at the evolution of Sahel rainfall over the 20th century, the model looks pretty good (the quality of the simulation is quite stunning if one runs the atmosphere/land model over the observed sea surface temperatures); on the other hand, if one looks at some specific features of the African monsoonal circulation, this model does not stand out as particularly impressive. But no criterion, to my knowledge, has demonstrated the ability to discriminate between models that decrease and models that increase rainfall in the Sahel in the future.
For an example of an attempt at using observations and model ensembles to constrain climate sensitivity, see Knutti et al. 2006, who start with the spread of sensitivities within an ensemble and look for observations that distinguish high- and low-sensitivity models, in this case using the seasonal cycle of surface temperature, with a neural network defining the relationship. This is far from the final story, but I like the idea of using the seasonal cycle for this purpose — there is something to be said for comparing forced responses with forced responses. A closer look at the seasonal cycle of Sahel rainfall in models and observations might be warranted to help reduce uncertainty in the response of Sahel rainfall to increasing CO2. I also suspect that attempts at constraining climate sensitivity with satellite observations of radiative fluxes might benefit from more of a focus on the seasonal cycle as opposed to, say, interannual variability.