Lanzante, John R., August 2024: Efficient and accurate shortcuts for calculating the extended heat index. Journal of Applied Meteorology and Climatology, in press, DOI:10.1175/JAMC-D-24-0081.1.
The heat index (HI), based on Steadman's model of thermoregulation, has found wide applicability for estimating human heat stress. It has proven useful in research endeavors aimed at estimating future changes associated with global climate change, as well as operationally for the issuance of heat advisories by the US National Weather Service (NWS). The actual computation of the HI based on ambient temperature and humidity has been streamlined by the use of a polynomial fit by Rothfusz to Steadman's model output. Recently, Steadman's model has been updated by Lu and Romps to provide applicability to temperature and humidity values that were "out of range" of Steadman's model. The authors of this Extended Heat Index (EHI) have provided computer code to enable its application. However, because of the complexity of the model, execution of the code would be relatively costly in applications involving dense grids and multi-model ensembles covering decades, such as those appropriate for assessing the range of possible changes in human heat stress. Here shortcuts are provided in the form of a "lookup table" and polynomials that provide computational savings of several orders of magnitude. Error analyses provide estimates of accuracy.
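The Rothfusz polynomial fit mentioned above is the published NWS regression; as a hedged illustration (this is the well-known NWS formula, not the lookup table or new polynomials introduced by the paper), a minimal Python sketch, valid roughly for heat indices of 80 °F and above:

```python
def rothfusz_heat_index(temp_f, rh):
    """NWS Rothfusz regression: approximate heat index (deg F) from
    air temperature (deg F) and relative humidity (%).  The NWS applies
    further adjustments near the edges of its validity range; those are
    omitted here for brevity."""
    t, r = temp_f, rh
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)
```

For example, 90 °F at 70% relative humidity yields a heat index of roughly 106 °F, in line with the published NWS chart.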
Statistical downscaling (SD) methods used to refine future climate change projections produced by physical models have been applied to a variety of variables. We evaluate four empirical distributional-type SD methods as applied to daily precipitation, which presents a special challenge because of its binary nature (wet vs. dry days) and its tendency toward a long right tail. Using data over the Continental U.S., we employ a ‘Perfect Model’ approach in which data from a large‐scale dynamical model are used as a proxy for both observations and model output. This experimental design allows for an assessment of expected performance of SD methods in a future high‐emissions climate‐change scenario. We find that performance is tied much more to configuration options than to the choice of SD method. In particular, proper handling of dry days (i.e., those with zero precipitation) is crucial to success. Although SD skill in reproducing day‐to‐day variability is modest (~15–25%), about half that found for temperature in our earlier work, skill is much greater with regard to reproducing the statistical distribution of precipitation (~50–60%). This disparity is the result of the stochastic nature of precipitation, as pointed out by other authors. Distributional skill in the tails is lower overall (~30–35%), and in some regions and seasons it is small to non‐existent. Even when SD skill in the tails is reasonably good, in some instances, particularly in the southeastern United States during summer, absolute daily errors at some gridpoints can be large (~20 mm or more), highlighting the challenges in projecting future extremes.
Lanzante, John R., November 2021: Testing for differences between two distributions in the presence of serial correlation using the Kolmogorov–Smirnov and Kuiper's tests. International Journal of Climatology, 41(14), 6314-6323, DOI:10.1002/joc.7196.
Testing for differences between two states is a staple of climate research, for example, applying a Student's t test to test for differences in means. A more general approach is to test for differences in the entire distributions. Increasingly, this latter approach is being used in the context of climate change research, where some societal impacts may be more sensitive to changes further from the centre of the distribution. The Kolmogorov–Smirnov (KS) test, probably the most widely used method in distributional testing, along with the closely related but lesser known Kuiper's (KU) test, is examined here. These, like most common statistical tests, assume that the data to which they are applied consist of independent observations. Unfortunately, commonly used data such as daily time series of temperature typically violate this assumption due to day-to-day autocorrelation. This work explores the consequences of violating the independence assumption. Three variants of the KS and KU tests are explored: the traditional approach ignoring autocorrelation, use of an ‘effective sample size’ based on the lag-1 autocorrelation, and Monte Carlo simulations employing a first-order autoregressive model appropriate to a variety of data commonly used in climate science. Results indicate that large errors in inference are possible when the temporal coherence is ignored. The guidance and materials provided here can be used to anticipate the magnitude of the errors. Bias caused by the errors can be mitigated via easy-to-use ‘look-up’ tables or, more broadly, through application of polynomial coefficients fit to the simulation results.
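The ‘effective sample size’ variant above is commonly computed from the AR(1) approximation n_eff = n(1 − r1)/(1 + r1); the sketch below assumes that standard formula and is not necessarily the paper's exact implementation:

```python
import numpy as np

def effective_sample_size(x):
    """Reduce the nominal sample size n according to the lag-1
    autocorrelation r1, using the common AR(1) approximation
    n_eff = n * (1 - r1) / (1 + r1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xm = x - x.mean()
    # Lag-1 autocorrelation estimated as a normalized lagged dot product.
    r1 = np.dot(xm[:-1], xm[1:]) / np.dot(xm, xm)
    return n * (1.0 - r1) / (1.0 + r1)
```

A strongly autocorrelated series (e.g. a near-linear trend) yields an effective sample size far below n, whereas anticorrelated data can yield n_eff above n; either way, n_eff rather than n would be used to set the critical value of the KS or KU test.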
Statistical downscaling methods are extensively used to refine future climate change projections produced by physical models. Distributional methods, which are among the simplest to implement, are also among the most widely used, either by themselves or in conjunction with more complex approaches. Here, building on earlier work, we evaluate the performance of seven methods in this class that range widely in their degree of complexity. We employ daily maximum temperature over the Continental U.S. in a "Perfect Model" approach in which the output from a large‐scale dynamical model is used as a proxy for both observations and model output. Importantly, this experimental design allows one to estimate expected performance under a future high‐emissions climate‐change scenario.
We examine skill over the full distribution as well in the tails, seasonal variations in skill, and the ability to reproduce the climate change signal. Viewed broadly, there generally are modest overall differences in performance across the majority of the methods. However, the choice of philosophical paradigms used to define the downscaling algorithms divides the seven methods into two classes, of better vs. poorer overall performance. In particular, the bias‐correction plus change‐factor approach performs better overall than the bias‐correction only approach. Finally, we examine the performance of some special tail treatments that we introduced in earlier work which were based on extensions of a widely used existing scheme. We find that our tail treatments provide a further enhancement in downscaling extremes.
The cumulative distribution function transform (CDFt) downscaling method has been used widely to provide local‐scale information and bias correction to output from physical climate models. The CDFt approach is one from the category of statistical downscaling methods that operates via transformations between statistical distributions. Although numerous studies have demonstrated that such methods provide value overall, much less effort has focused on their performance with regard to values in the tails of distributions. We evaluate the performance of CDFt‐generated tail values based on four distinct approaches, two native to CDFt and two of our own creation, in the context of a "Perfect Model" setting in which global climate model output is used as a proxy for both observational and model data. We find that the native CDFt approaches can have sub‐optimal performance in the tails, particularly with regard to the maximum value. However, our alternative approaches provide substantial improvement.
A recent study found a downward trend over 1949–2016 in the speed at which tropical cyclones move. If this could be attributed to climate change, the implications would be enormous. Slower-moving storms, as exemplified by Hurricane Harvey in 2017, have the potential to produce much more rainfall than faster ones. This study fits within NOAA's mission of understanding climate variability, predicting climate change, and helping people prepare for climate change by fostering informed decisions. This study finds that the bulk of the decrease in speed is related to abrupt changes that occur in the earlier part of the period of study. Both the abruptness and the lack of change during more recent times argue against a dominant role for climate change. The results suggest that the changes are likely due to a combination of natural climate variability and changes over time in the manner in which tropical cyclones were tracked. In particular, the introduction of satellite remote sensing in the 1960s may have distorted the record by yielding more observations in areas that had previously been uncharted. It is speculated that such areas are ones where storms naturally move more slowly.
Statistical downscaling is used widely to refine projections of future climate. Although generally successful, in some circumstances it can lead to highly erroneous results.
Statistical downscaling (SD) is commonly used to provide information for the assessment of climate change impacts. Using as input the output from large-scale dynamical climate models and observation-based data products, it aims to provide finer grain detail and also to mitigate systematic biases. It is generally recognized as providing added value. However, one of the key assumptions of SD is that the relationships used to train the method during a historical time period are unchanged in the future, in the face of climate change. The validity of this assumption is typically quite difficult to assess in the normal course of analysis, as observations of future climate are lacking. We approach this problem using a “Perfect Model” experimental design in which high-resolution dynamical climate model output is used as a surrogate for both past and future observations.
We find that while SD in general adds considerable value, in certain well-defined circumstances it can produce highly erroneous results. Furthermore, the breakdown of SD in these contexts could not be foreshadowed during the typical course of evaluation based only on available historical data. We diagnose and explain the reasons for these failures in terms of physical, statistical and methodological causes. These findings highlight the need for caution in the use of statistically downscaled products as well as the need for further research to consider other hitherto unknown pitfalls, perhaps utilizing more advanced “Perfect Model” designs than the one we have employed.
Empirical statistical downscaling (ESD) methods seek to refine global climate model (GCM) outputs via processes that glean information from a combination of observations and GCM simulations. They aim to create value-added climate projections by reducing biases and adding finer spatial detail. Analysis techniques, such as cross-validation, allow assessments of how well ESD methods meet these goals during observational periods. However, the extent to which an ESD method’s skill might differ when applied to future climate projections cannot be assessed readily in the same manner. Here we present a “perfect model” experimental design that quantifies aspects of ESD method performance for both historical and late 21st century time periods. The experimental design tests a key stationarity assumption inherent to ESD methods – namely, that ESD performance when applied to future projections is similar to that during the observational training period. Case study results employing a single ESD method (an Asynchronous Regional Regression Model variant) and climate variable (daily maximum temperature) demonstrate that violations of the stationarity assumption can vary geographically, seasonally, and with the amount of projected climate change. For the ESD method tested, the greatest challenges in downscaling daily maximum temperature projections are revealed to occur along coasts, in summer, and under conditions of greater projected warming. We conclude with a discussion of the potential use and expansion of the perfect model experimental design, both to inform the development of improved ESD methods and to provide guidance on the use of ESD products in climate impacts analyses and decision-support applications.
We analyze the relation between atmospheric temperature and water vapor—a fundamental component of the global climate system—for stratospheric water vapor (SWV). We compare measurements of SWV (and methane where available) over the period 1980–2011 from the NOAA balloon-borne frostpoint hygrometer (NOAA-FPH), SAGE II, the Halogen Occultation Experiment (HALOE), the Microwave Limb Sounder (MLS)/Aura, and the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) to model predictions based on troposphere-to-stratosphere transport from ERA-Interim, and temperatures from ERA-Interim, the Modern Era Retrospective-Analysis (MERRA), the Climate Forecast System Reanalysis (CFSR), Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC), HadAT2, and RICHv1.5. All model predictions are dry biased. The interannual anomalies of the model predictions show periods of fairly regular oscillations, alternating with more quiescent periods and a few large-amplitude oscillations. They all agree well (correlation coefficients 0.9 and larger) with observations for higher-frequency variations (periods up to 2–3 years). Differences among the SWV observations, and among the temperature datasets, render analysis of the model-minus-observation residuals difficult. However, we find fairly well-defined periods of drifts in the residuals. For the 1980s, model predictions differ most, and only the calculation with ERA-Interim temperatures is roughly within observational uncertainties. All model predictions show a drying relative to HALOE in the 1990s, followed by a moistening in the early 2000s. Drifts relative to NOAA-FPH are similar (but stronger), whereas no drift is present relative to SAGE II. As a result, the model calculations have a less pronounced drop in SWV in 2000 than HALOE. From the mid-2000s onward, models and observations agree reasonably well, and some differences can be traced to problems in the temperature data.
These results indicate that both SWV and temperature data may still suffer from artifacts that need to be resolved in order to answer the question whether the large-scale flow and temperature field is sufficient to explain water entering the stratosphere.
Using simulations performed with 18 coupled atmosphere-ocean global climate models from the CMIP5 project, projections of Northern Hemisphere snowfall under the RCP4.5 scenario are analyzed for the period 2006-2100. These models perform well in simulating 20th century snowfall, although there is a positive bias in many regions. Annual snowfall is projected to decrease across much of the Northern Hemisphere during the 21st century, with increases projected at higher latitudes. On a seasonal basis, the transition zone between negative and positive snowfall trends corresponds approximately to the -10 °C isotherm of the late 20th century mean surface air temperature such that positive trends prevail in winter over large regions of Eurasia and North America. Redistributions of snowfall throughout the entire snow season are projected to occur – even in locations where there is little change in annual snowfall. Changes in the fraction of precipitation falling as snow contribute to decreases in snowfall across most Northern Hemisphere regions, while changes in total precipitation typically contribute to increases in snowfall. A signal-to-noise analysis reveals that the projected changes in snowfall, based on the RCP4.5 scenario, are likely to become apparent during the 21st century for most locations in the Northern Hemisphere. The snowfall signal emerges more slowly than the temperature signal, suggesting that changes in snowfall are not likely to be early indicators of regional climate change.
Santer, B D., and John R Lanzante, et al., January 2013: Identifying human influences on atmospheric temperature. Proceedings of the National Academy of Sciences, 110(1), DOI:10.1073/pnas.1210514109.
We perform a multimodel detection and attribution study with climate model simulation output and satellite-based measurements of tropospheric and stratospheric temperature change. We use simulation output from 20 climate models participating in phase 5 of the Coupled Model Intercomparison Project. This multimodel archive provides estimates of the signal pattern in response to combined anthropogenic and natural external forcing (the fingerprint) and the noise of internally generated variability. Using these estimates, we calculate signal-to-noise (S/N) ratios to quantify the strength of the fingerprint in the observations relative to fingerprint strength in natural climate noise. For changes in lower stratospheric temperature between 1979 and 2011, S/N ratios vary from 26 to 36, depending on the choice of observational dataset. In the lower troposphere, the fingerprint strength in observations is smaller, but S/N ratios are still significant at the 1% level or better, and range from three to eight. We find no evidence that these ratios are spuriously inflated by model variability errors. After removing all global mean signals, model fingerprints remain identifiable in 70% of the tests involving tropospheric temperature changes. Despite such agreement in the large-scale features of model and observed geographical patterns of atmospheric temperature change, most models do not replicate the size of the observed changes. On average, the models analyzed underestimate the observed cooling of the lower stratosphere and overestimate the warming of the troposphere. Although the precise causes of such differences are unclear, model biases in lower stratospheric temperature trends are likely to be reduced by more realistic treatment of stratospheric ozone depletion and volcanic aerosol forcing.
Santer, B D., and John R Lanzante, et al., November 2011: Separating signal and noise in atmospheric temperature changes: The importance of timescale. Journal of Geophysical Research: Atmospheres, 116, D22105, DOI:10.1029/2011JD016263.
We compare global-scale changes in satellite estimates of the temperature of the lower troposphere (TLT) with model simulations of forced and unforced TLT changes. While previous work has focused on a single period of record, we select analysis timescales ranging from 10 to 32 years, and then compare all possible observed TLT trends on each timescale with corresponding multi-model distributions of forced and unforced trends. We use observed estimates of the signal component of TLT changes and model estimates of climate noise to calculate timescale-dependent signal-to-noise ratios (S/N). These ratios are small (less than 1) on the 10-year timescale, increasing to more than 3.9 for 32-year trends. This large change in S/N is primarily due to a decrease in the amplitude of internally generated variability with increasing trend length. Because of the pronounced effect of interannual noise on decadal trends, a multi-model ensemble of anthropogenically-forced simulations displays many 10-year periods with little warming. A single decade of observational TLT data is therefore inadequate for identifying a slowly evolving anthropogenic warming signal. Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.
Seidel, D J., Nathan P Gillett, John R Lanzante, K P Shine, and P W Thorne, July 2011: Stratospheric temperature trends: our evolving understanding. Wiley Interdisciplinary Reviews: Climate Change, 2(4), DOI:10.1002/wcc.125.
We review the scientific literature since the 1960s to examine the evolution of modeling tools and observations that have advanced understanding of global stratospheric temperature changes. Observations show overall cooling of the stratosphere during the period for which they are available (since the late 1950s and late 1970s from radiosondes and satellites, respectively), interrupted by episodes of warming associated with volcanic eruptions, and superimposed on variations associated with the solar cycle. There has been little global mean temperature change since about 1995. The temporal and vertical structure of these variations are reasonably well explained by models that include changes in greenhouse gases, ozone, volcanic aerosols, and solar output, although there are significant uncertainties in the temperature observations and regarding the nature and influence of past changes in stratospheric water vapor. As a companion to a recent WIREs review of tropospheric temperature trends, this article identifies areas of commonality and contrast between the tropospheric and stratospheric trend literature. For example, the increased attention over time to radiosonde and satellite data quality has contributed to better characterization of uncertainty in observed trends both in the troposphere and in the lower stratosphere, and has highlighted the relative deficiency of attention to observations in the middle and upper stratosphere. In contrast to the relatively unchanging expectations of surface and tropospheric warming primarily induced by greenhouse gas increases, stratospheric temperature change expectations have arisen from experiments with a wider variety of model types, showing more complex trend patterns associated with a greater diversity of forcing agents.
Thorne, P W., John R Lanzante, T C Peterson, D J Seidel, and K P Shine, January 2011: Tropospheric temperature trends: History of an ongoing controversy. Wiley Interdisciplinary Reviews: Climate Change, 2(1), DOI:10.1002/wcc.80.
Changes in the vertical profile of atmospheric temperature have a particular importance in climate research because climate models predict a distinctive vertical profile. With increasing greenhouse gas concentrations, the surface and troposphere are projected to warm, with an enhancement of that warming in the tropical upper troposphere, and the stratosphere is projected to cool. Hence attempts to detect this distinct "fingerprint" have been a focus for observational studies. The topic acquired heightened importance following the 1990 publication of an analysis of satellite data which challenged the reality of this projected tropospheric warming. This review documents the evolution of understanding over the last four decades of tropospheric and stratospheric temperature trends and their likely causes. Particular focus is given to the difficulty of producing homogenized datasets, with which to derive trends, from both radiosonde and satellite observing systems, necessitated by the many systematic changes over time. The value of multiple independent analyses is demonstrated, where alternative methods of homogenization have been employed. In parallel with developments in the observational datasets, increased computer power and improved understanding of climate forcing mechanisms have led to refined estimates of temperature trends from climate models -- these now include results from a wide range of models and a better understanding of internal variability. Within the troposphere, it is concluded that there is no reasonable evidence of a fundamental disagreement between trends from models and observations when a comprehensive treatment of uncertainties in both is included. Within the stratosphere, models and observations appear to agree reasonably well, although the observational and model analyses are substantially less mature than they are in the troposphere.
Free, M, and John R Lanzante, June 2009: Effect of volcanic eruptions on the vertical temperature profile in radiosonde data and climate models. Journal of Climate, 22(11), DOI:10.1175/2008JCLI2562.1.
Both observed and modeled upper-air temperature profiles show the tropospheric cooling and tropical stratospheric warming effects from the three major volcanic eruptions since 1960. Detailed comparisons of vertical profiles of Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC) and Hadley Centre Atmospheric Temperatures, Version 2 (HadAT2) radiosonde temperatures with output from 6 coupled GCMs show good overall agreement on the responses to the 1991 Pinatubo and 1982 El Chichon eruptions in the troposphere and stratosphere, with a tendency of the models to underestimate the upper tropospheric cooling and overestimate the stratospheric warming relative to observations. Furthermore, the level of maximum stratospheric volcanic warming does not always correspond between models and observations. In addition, models and observations show a large disagreement at 100 hPa for Pinatubo in the tropics, where observations show essentially no change, while models show significant warming of ~0.7 to ~2.6 K. This difference occurs even in models that accurately simulate stratospheric warming at 50 hPa. Most models overestimate the tropospheric cooling effect of the 1963 Agung eruption in the tropics and underestimate it in the Southern Hemisphere extratropics, but uncertainties in the observations and the volcanic forcings make meaningful comparisons difficult for that eruption. Overall, the Parallel Climate Model (PCM) is an outlier in that it simulates more volcanic-induced stratospheric warming than both the other models and the observations in most cases. Results for all three eruptions are sensitive to the methods used to remove ENSO effects.
The cooling effect at the surface in the tropics is amplified with altitude in the troposphere in both observations and models, but this amplification is greater for the observations than for the models. In contrast, amplification for the ENSO signal in the models is more similar to that in the observations.
Estimates of the effect of the eruptions on temperature trends are dependent on the method used and the choice of parameters for these methods. From 1979 to 1999 in the tropics, RATPAC shows a trend of less than 0.1 K/decade at and above 300 hPa while the mean of the models used here has a trend of more than 0.3 K/decade, giving a difference of ~0.2 K/decade. From 0.02 to 0.08 K/decade of this difference may be due to the influence of volcanic eruptions, with the smaller estimate appearing more likely than the larger. In the lower troposphere, none of the difference in trends appears to be attributable to volcanic effects.
Lanzante, John R., June 2009: Comment on “Trends in the temperature and water vapor content of the tropical lower stratosphere: Sea surface connection” by Karen H. Rosenlof and George C. Reid. Journal of Geophysical Research, 114, D12104, DOI:10.1029/2008JD010542.
Lanzante, John R., and M Free, October 2008: Comparison of radiosonde and GCM vertical temperature trend profiles: Effects of dataset choice and data homogenization. Journal of Climate, 21(20), 5417-5435.
In comparisons of radiosonde vertical temperature trend profiles with comparable profiles derived from selected Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) general circulation models (GCMs) driven by major external forcings of the latter part of the twentieth century, model trends exhibit a positive bias relative to radiosonde trends in the majority of cases for both time periods examined (1960–99 and 1979–99). Homogeneity adjustments made in the Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC) and Hadley Centre Atmospheric Temperatures, version 2 (HadAT2), radiosonde datasets, which are applied by dataset developers to account for time-varying biases introduced by historical changes in instruments and measurement practices, reduce the relative bias in most cases. Although some differences were found between the two observed datasets, in general the observed trend profiles were more similar to one another than either was to the GCM profiles.
In the troposphere, adjustment has a greater impact on improving agreement of the shapes of the trend profiles than on improving agreement of the layer mean trends, whereas in the stratosphere the opposite is true. Agreement between the shapes of GCM and radiosonde trend profiles is generally better in the stratosphere than the troposphere, with more complexity to the profiles in the latter than the former. In the troposphere the tropics exhibit the poorest agreement between GCM and radiosonde trend profiles, but also the largest improvement in agreement resulting from homogeneity adjustment.
In the stratosphere, radiosonde trends indicate more cooling than GCMs. For the 1979–99 period, a disproportionate amount of this discrepancy arises several months after the eruption of Mount Pinatubo, at which time temperatures in the radiosonde time series cool abruptly by ~0.5 K compared to those derived from GCMs, and this difference persists to the end of the record.
Santer, B D., and John R Lanzante, et al., October 2008: Consistency of modelled and observed temperature trends in the tropical troposphere. International Journal of Climatology, 28(13), DOI:10.1002/joc.1756.
A recent report of the U.S. Climate Change Science Program (CCSP) identified a potentially serious inconsistency between modelled and observed trends in tropical lapse rates (Karl et al., 2006). Early versions of satellite and radiosonde datasets suggested that the tropical surface had warmed more than the troposphere, while climate models consistently showed tropospheric amplification of surface warming in response to human-caused increases in well-mixed greenhouse gases (GHGs). We revisit such comparisons here using new observational estimates of surface and tropospheric temperature changes. We find that there is no longer a serious discrepancy between modelled and observed trends in tropical lapse rates.
This emerging reconciliation of models and observations has two primary explanations. First, because of changes in the treatment of buoy and satellite information, new surface temperature datasets yield slightly reduced tropical warming relative to earlier versions. Second, recently developed satellite and radiosonde datasets show larger warming of the tropical lower troposphere. In the case of a new satellite dataset from Remote Sensing Systems (RSS), enhanced warming is due to an improved procedure of adjusting for inter-satellite biases. When the RSS-derived tropospheric temperature trend is compared with four different observed estimates of surface temperature change, the surface warming is invariably amplified in the tropical troposphere, consistent with model results. Even if we use data from a second satellite dataset with smaller tropospheric warming than in RSS, observed tropical lapse rate trends are not significantly different from those in all other model simulations.
Our results contradict a recent claim that all simulated temperature trends in the tropical troposphere and in tropical lapse rates are inconsistent with observations. This claim was based on use of older radiosonde and satellite datasets, and on two methodological errors: the neglect of observational trend uncertainties introduced by interannual climate variability, and application of an inappropriate statistical 'consistency test'.
Lanzante, John R., 2007: Diagnosis of Radiosonde Vertical Temperature Trend Profiles: Comparing the Influence of Data Homogenization versus Model Forcings. Journal of Climate, 20(21), 5356-5364.
Radiosonde temperature measurements have been used in studies that seek to identify the human influence on climate. However, such measurements are known to be contaminated by artificial inhomogeneities introduced by changes in instruments and recording practices that have occurred over time. Some simple diagnostics are used to compare vertical profiles of temperature trends from the observed data with simulations from a GCM driven by several different sets of forcings. Unlike most earlier studies of this type, both raw (i.e., fully contaminated) and adjusted observations (i.e., treated to remove some of the contamination) are utilized. The comparisons demonstrate that the effect of observational data adjustment can be as important as the inclusion of some major climate forcings in the model simulations. The effects of major volcanic eruptions critically influence temperature trends, even over a time period nearly four decades in length.
In addition, the adjusted data show consistently better agreement with simulations from a climate model for 1959–97 than do the unadjusted data. Particularly noteworthy is the fact that the adjustments supply missing warming in the tropical upper troposphere that has been attributed to model error in a number of earlier studies.
Finally, an evaluation of the fidelity of the model’s temperature response to major volcanic eruptions is conducted. Although the major conclusions of this study are unaffected by shortcomings of the simulations, the evaluation highlights the fact that, even over a fairly long period of record (40 yr), such shortcomings can have an important impact on trends and trend comparisons.
Lanzante, John R., T C Peterson, F J Wentz, K Y Vinnikov, D J Seidel, C Mears, J R Christy, C E Forest, Russell S Vose, P W Thorne, and N C Grody, 2006: What do observations indicate about the changes of temperatures in the atmosphere and at the surface since the advent of measuring temperatures vertically? In Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences, Karl, T R, S J Hassol, C D Miller, W L Murray, Eds, Washington, DC, Climate Change Science Program/Subcommittee on Global Change Research, 47-70. PDF
Wigley, T M., V Ramaswamy, J R Christy, John R Lanzante, C Mears, B D Santer, and C K Folland, 2006: Executive Summary. In Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences, Karl, T R, S J Hassol, C D Miller, W L Murray, Eds, Washington, DC, A Report by the Climate Change Science Program and the Subcommittee on Global Change Research, 1-14. PDF
Free, M, D J Seidel, J K Angell, John R Lanzante, I Durre, and T C Peterson, 2005: Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC): A new data set of large-area anomaly time series. Journal of Geophysical Research, 110, D22101, DOI:10.1029/2005JD006169. Abstract
A new data set containing large-scale regional mean upper air temperatures based on adjusted global radiosonde data is now available up to the present. Starting with data from 85 of the 87 stations adjusted for homogeneity by Lanzante, Klein and Seidel, we extend the data beyond 1997 where available, using a first differencing method combined with guidance from station metadata. The data set consists of temperature anomaly time series for the globe, the hemispheres, tropics (30°N–30°S) and extratropics. Data provided include annual time series for 13 pressure levels from the surface to 30 mbar and seasonal time series for three broader layers (850–300, 300–100 and 100–50 mbar). The additional years of data increase trends to more than 0.1 K/decade for the global and tropical midtroposphere for 1979–2004. Trends in the stratosphere are approximately -0.5 to -0.9 K/decade and are more negative in the tropics than for the globe. Differences between trends at the surface and in the troposphere are generally reduced in the new time series as compared to raw data and are near zero in the global mean for 1979–2004. We estimate the uncertainty in global mean trends from 1979 to 2004 introduced by the use of first difference processing after 1995 at less than 0.02–0.04 K/decade in the troposphere and up to 0.15 K/decade in the stratosphere at individual pressure levels. Our reliance on metadata, which is often incomplete or unclear, adds further, unquantified uncertainty that could be comparable to the uncertainty from the FD processing. Because the first differencing method cannot be used for individual stations, we also provide updated station time series that are unadjusted after 1997. The Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC) data set will be archived and updated at NOAA's National Climatic Data Center as part of its climate monitoring program.
Lanzante, John R., 2005: A cautionary note on the use of error bars. Journal of Climate, 18(17), 3699-3703. Abstract PDF
Climate studies often involve comparisons between estimates of some parameter derived from different observed and/or model-generated datasets. It is common practice to present estimates of two or more statistical quantities with error bars about each representing a confidence interval. If the error bars do not overlap, it is presumed that there is a statistically significant difference between them. In general, such a procedure is not valid and usually results in declaring statistical significance too infrequently. Simple examples that demonstrate the nature of this pitfall, along with some formulations, are presented. It is recommended that practitioners use standard hypothesis testing techniques that have been derived from statistical theory rather than the ad hoc approach involving error bars.
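A minimal numerical sketch of the pitfall described in this abstract (entirely synthetic, not drawn from the paper): two samples are constructed so that their 95% confidence intervals overlap, yet a standard two-sample test on the difference of means is significant at the 5% level. The data and thresholds are illustrative assumptions only.

```python
import math
import numpy as np

# Two artificial samples with identical spread; the mean offset (0.20) is
# chosen so the individual 95% CIs overlap but the difference is significant.
a = np.linspace(-1.0, 1.0, 100)   # mean 0
b = a + 0.20                      # mean 0.20, same spread

def ci95(x):
    """95% confidence interval for the mean (normal approximation)."""
    se = x.std(ddof=1) / math.sqrt(len(x))
    h = 1.96 * se
    return x.mean() - h, x.mean() + h

ci_a, ci_b = ci95(a), ci95(b)
overlap = ci_a[1] > ci_b[0]       # error bars overlap?

# Proper two-sample test on the difference of the means
se_diff = math.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
z = (b.mean() - a.mean()) / se_diff
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"CIs overlap: {overlap}, two-sided p = {p:.4f}")
```

Here the overlapping error bars would lead one to declare "no significant difference", while the hypothesis test correctly rejects equality, which is the direction of the error (too few declarations of significance) noted in the abstract.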
Santer, B D., T M L Wigley, C Mears, F J Wentz, Stephen A Klein, D J Seidel, Karl E Taylor, P W Thorne, Michael F Wehner, Peter J Gleckler, J S Boyle, William D Collins, Keith W Dixon, Charles Doutriaux, M Free, Qiang Fu, J E Hansen, G S Jones, R Ruedy, T R Karl, John R Lanzante, Gerald A Meehl, V Ramaswamy, G Russell, and Gavin A Schmidt, 2005: Amplification of surface temperature trends and variability in the tropical atmosphere. Science, 309(5740), DOI:10.1126/science.1114867. Abstract
The month-to-month variability of tropical temperatures is larger in the troposphere than at Earth's surface. This amplification behavior is similar in a range of observations and climate model simulations and is consistent with basic theory. On multidecadal time scales, tropospheric amplification of surface warming is a robust feature of model simulations, but it occurs in only one observational data set. Other observations show weak, or even negative, amplification. These results suggest either that different physical mechanisms control amplification processes on monthly and decadal time scales, and models fail to capture such behavior; or (more plausibly) that residual errors in several observational data sets used here affect their representation of long-term trends.
Seidel, D J., J K Angell, A Robock, B Hicks, K Labitzke, John R Lanzante, J Logan, Jerry D Mahlman, V Ramaswamy, W J Randel, E Rasmusson, R Ross, and S F Singer, 2005: Jim Angell's contributions to meteorology. Bulletin of the American Meteorological Society, 86(3), DOI:10.1175/BAMS-86-3-403.
Sherwood, S C., John R Lanzante, and C Meyer, 2005: Radiosonde daytime biases and late-20th century warming. Science, 309(5740), 1556-1559. Abstract PDF Supplemental
The temperature difference between adjacent 0000 and 1200 UTC weather balloon (radiosonde) reports shows a pervasive tendency toward cooler daytime compared to nighttime observations since the 1970s, especially at tropical stations. Several characteristics of this trend indicate that it is an artifact of systematic reductions over time in the uncorrected error due to daytime solar heating of the instrument and should be absent from accurate climate records. Although other problems may exist, this effect alone is of sufficient magnitude to reconcile radiosonde tropospheric temperature trends and surface trends during the late 20th century.
Free, M, J K Angell, I Durre, John R Lanzante, T C Peterson, and D J Seidel, 2004: Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets. Journal of Climate, 17(21), 4171-4179. Abstract PDF
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade⁻¹ for 1960–97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade⁻¹ (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
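The first difference procedure described above can be sketched in a few lines. The station values below are hypothetical annual anomalies invented for illustration (the paper uses monthly radiosonde data); the NaN marks a year dropped around a suspected instrument discontinuity, so that no year-to-year difference spans the jump.

```python
import numpy as np

# Hypothetical annual temperature anomalies (K) for 3 stations, 1960-1969.
# Station 2 has an artificial +1.1 K instrument jump; np.nan marks the year
# dropped around the suspected discontinuity.
stations = np.array([
    [0.0, 0.1, 0.0, 0.2, 0.1, 0.2, 0.3, 0.2, 0.3, 0.4],
    [0.1, 0.0, 0.1, 0.1, 0.2, np.nan, 1.3, 1.2, 1.4, 1.5],  # jump hidden by nan
    [0.0, 0.2, 0.1, 0.1, 0.2, 0.1, 0.2, 0.3, 0.3, 0.3],
])

# Year-to-year first differences at each station; the nan gap removes the
# two differences that would span the discontinuity.
fd = np.diff(stations, axis=1)

# Large-scale mean difference series (stations with a gap that year are
# excluded), then cumulative sum to rebuild the mean time series.
mean_fd = np.nanmean(fd, axis=0)
rebuilt = np.concatenate([[0.0], np.cumsum(mean_fd)])
```

Because the two differences spanning the jump become NaN and are skipped by `nanmean`, the spurious +1.1 K step never enters the rebuilt large-scale series; only the stations' genuine year-to-year changes contribute.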
Seidel, D J., J K Angell, M Free, J R Christy, R Spencer, Stephen A Klein, John R Lanzante, C Mears, M Schabel, F J Wentz, D E Parker, P W Thorne, and A Sterin, 2004: Uncertainty in signals of large-scale climate variations in radiosonde and satellite upper-air temperature datasets. Journal of Climate, 17(11), 2225-2240. Abstract PDF
There is no single reference dataset of long-term global upper-air temperature observations, although several groups have developed datasets from radiosonde and satellite observations for climate-monitoring purposes. The existence of multiple data products allows for exploration of the uncertainty in signals of climate variations and change. This paper examines eight upper-air temperature datasets and quantifies the magnitude and uncertainty of various climate signals, including stratospheric quasi-biennial oscillation (QBO) and tropospheric ENSO signals, stratospheric warming following three major volcanic eruptions, the abrupt tropospheric warming of 1976–77, and multidecadal temperature trends. Uncertainty estimates are based both on the spread of signal estimates from the different observational datasets and on the inherent statistical uncertainties of the signal in any individual dataset.
The large spread among trend estimates suggests that using multiple datasets to characterize large-scale upper-air temperature trends gives a more complete characterization of their uncertainty than reliance on a single dataset. For other climate signals, there is value in using more than one dataset, because signal strengths vary. However, the purely statistical uncertainty of the signal in individual datasets is large enough to effectively encompass the spread among datasets. This result supports the notion of an 11th climate-monitoring principle, augmenting the 10 principles that have now been generally accepted (although not generally implemented) by the climate community. This 11th principle calls for monitoring key climate variables with multiple, independent observing systems for measuring the variable, and multiple, independent groups analyzing the data.
Seidel, D J., and John R Lanzante, July 2004: An assessment of three alternatives to linear trends for characterizing global atmospheric temperature changes. Journal of Geophysical Research, 109, D14108, DOI:10.1029/2003JD004414. Abstract
Historical changes in global atmospheric temperature are typically estimated using simple linear trends. This paper considers three alternative simple statistical models, each involving breakpoints (abrupt changes): a flat steps model, in which all changes occur abruptly; a piecewise linear model; and a sloped steps model, incorporating both abrupt changes and slopes during the periods between breakpoints. First- and second-order autoregressive models are used in combination with each of the above. Goodness of fit of the models is evaluated using the Schwarz Bayesian Information Criterion. These models are applied to the instrumental record of global monthly temperature anomalies at the surface and to the radiosonde and satellite records for the troposphere and stratosphere. The alternative models often provide a better fit to the observations than the simple linear model. Typically the two top-performing models have very close values of the Schwarz Bayesian Information Criterion. Usually the two models have the same basic form and the same net temperature change but with a different choice of autoregressive model. However, in some cases the best fits are from two different basic models, yielding different net temperature changes and suggesting different interpretations of the nature of those changes. For the surface data during 1900–2002 the sloped steps and piecewise linear models offer the best fits. Results for tropospheric data suggest that it is reasonable to consider most of the warming during 1958–2001 to have occurred at the time of the abrupt climate regime shift in 1977. Two fundamentally different, but equally valid, descriptions of stratospheric cooling were found: gradual linear change versus more abrupt ratcheting down of temperature concentrated in postvolcanic periods (~2 years after eruption). 
Because models incorporating abrupt changes can be as explanatory as simple linear trends, we suggest consideration of these alternatives in climate change detection and attribution studies.
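The model comparison idea in this abstract can be illustrated with a toy example (entirely synthetic; not the paper's data, models, or code): a series that warms via an abrupt regime shift is fit with both a simple linear trend and a flat-steps model with a known breakpoint, and the Schwarz Bayesian Information Criterion (here in its standard Gaussian form, up to an additive constant) selects between them.

```python
import numpy as np

# Synthetic series: warming occurs as an abrupt 1977-style shift, not a trend.
rng = np.random.default_rng(42)
t = np.arange(1958, 2002)
y = np.where(t < 1977, 0.0, 0.4) + rng.normal(0, 0.1, t.size)

def bic(resid, k):
    """Gaussian BIC (up to a constant): n*ln(MSE) + k*ln(n)."""
    n = resid.size
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

# Model 1: simple linear trend (2 parameters: slope, intercept)
coef = np.polyfit(t, y, 1)
bic_linear = bic(y - np.polyval(coef, t), k=2)

# Model 2: flat steps with a break fixed at 1977 (2 parameters: two means)
step_fit = np.where(t < 1977, y[t < 1977].mean(), y[t >= 1977].mean())
bic_steps = bic(y - step_fit, k=2)

print(bic_linear, bic_steps)
```

With the same number of parameters, the flat-steps model leaves much smaller residuals for step-like data, so its BIC is lower (better); on real records, as the abstract notes, the top models are often nearly tied.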
Lanzante, John R., Stephen A Klein, and D J Seidel, 2003: Temporal homogenization of monthly radiosonde temperature data. Part I: Methodology. Journal of Climate, 16(2), 224-240. Abstract PDF
Historical changes in instrumentation and recording practices have severely compromised the temporal homogeneity of radiosonde data, a crucial issue for the determination of long-term trends. Methods developed to deal with these homogeneity problems have been applied to a near–globally distributed network of 87 stations using monthly temperature data at mandatory pressure levels, covering the period 1948–97. The homogenization process begins with the identification of artificial discontinuities through visual examination of graphical and textual materials, including temperature time series, transformations of the temperature data, and independent indicators of climate variability, as well as ancillary information such as station history metadata. To ameliorate each problem encountered, a modification was applied in the form of data adjustment or data deletion. A companion paper (Part II) reports on various analyses, particularly trend related, based on the modified data resulting from the method presented here.
Application of the procedures to the 87-station network revealed a number of systematic problems. The effects of the 1957 global 3-h shift of standard observation times (from 0300/1500 to 0000/1200 UTC) are seen at many stations, especially near the surface and in the stratosphere. Temperatures from Australian and former Soviet stations have been plagued by numerous serious problems throughout their history. Some stations, especially Soviet ones up until 1970, show a tendency for episodic drops in temperature that produce spurious downward trends. Stations from Africa and neighboring regions are found to be the most problematic; in some cases even the character of the interannual variability is unreliable. It is also found that temporal variations in observation time can lead to inhomogeneities as serious as the worst instrument-related problems.
Lanzante, John R., Stephen A Klein, and D J Seidel, 2003: Temporal homogenization of monthly radiosonde temperature data. Part II: Trends, Sensitivities, and MSU comparison. Journal of Climate, 16(2), 241-262. Abstract PDF
Trends in radiosonde-based temperatures and lower-tropospheric lapse rates are presented for the time periods 1959–97 and 1979–97, including their vertical, horizontal, and seasonal variations. A novel aspect is that estimates are made globally of the effects of artificial (instrumental or procedural) changes on the derived trends using data homogenization procedures introduced in a companion paper (Part I). Credibility of the data homogenization scheme is established by comparison with independent satellite temperature measurements derived from the microwave sounding unit (MSU) instruments for 1979–97. The various analyses are performed using monthly mean temperatures from a near–globally distributed network of 87 radiosonde stations.
The severity of instrument-related problems, which varies markedly by geographic region, was found, in general, to increase from the lower troposphere to the lower stratosphere, although surface data were found to be as problematic as data from the stratosphere. Except for the surface, there is a tendency for changes in instruments to artificially lower temperature readings with time, so that adjusting the data to account for this results in increased tropospheric warming and decreased stratospheric cooling. Furthermore, the adjustments tend to enhance warming in the upper troposphere more than in the lower troposphere; such sensitivity may have implications for “fingerprint” assessments of climate change. However, the most sensitive part of the vertical profile with regard to its shape was near the surface, particularly at regional scales. In particular, the lower-tropospheric lapse rate was found to be especially sensitive to adjustment as well as spatial sampling. In the lower stratosphere, instrument-related biases were found to artificially inflate latitudinal differences, leading to statistically significantly more cooling in the Tropics than elsewhere. After adjustment there were no significant differences between the latitude zones.
Shine, K P., M S Bourqui, Piers M Forster, S H E Hare, U Langematz, P Braesicke, V Grewe, M Ponater, C Schnadt, C A Smith, J D Haigh, John Austin, Neal Butchart, Drew Shindell, W J Randel, T Nagashima, R W Portmann, S Solomon, D J Seidel, John R Lanzante, Stephen A Klein, V Ramaswamy, and M Daniel Schwarzkopf, 2003: A comparison of model-simulated trends in stratospheric temperatures. Quarterly Journal of the Royal Meteorological Society, 129(590), DOI:10.1256/qj.02.186. Abstract
Estimates of annual-mean stratospheric temperature trends over the past twenty years, from a wide variety of models, are compared both with each other and with the observed cooling seen in trend analyses using radiosonde and satellite observations. The modelled temperature trends are driven by changes in ozone (either imposed from observations or calculated by the model), carbon dioxide and other relatively well-mixed greenhouse gases, and stratospheric water vapour.
The comparison shows that whilst models generally simulate similar patterns in the vertical profile of annual- and global-mean temperature trends, there is a significant divergence in the size of the modelled trends, even when similar trace gas perturbations are imposed. Coupled-chemistry models are in as good agreement as models using imposed observed ozone trends, despite the extra degree of freedom that the coupled models possess.
The modelled annual- and global-mean cooling of the upper stratosphere (near 1 hPa) is dominated by ozone and carbon dioxide changes, and is in reasonable agreement with observations. At about 5 hPa, the mean cooling from the models is systematically greater than that seen in the satellite data; however, for some models, depending on the size of the temperature trend due to stratospheric water vapour changes, the uncertainty estimates of the model and observations just overlap. Near 10 hPa there is good agreement with observations. In the lower stratosphere (20-70 hPa), ozone appears to be the dominant contributor to the observed cooling, although it does not, on its own, seem to explain the entire cooling.
Annual- and zonal-mean temperature trends at 100 hPa and 50 hPa are also examined. At 100 hPa, the modelled cooling due to ozone depletion alone is in reasonable agreement with the observed cooling at all latitudes. At 50 hPa, however, the observed cooling at midlatitudes of the northern hemisphere significantly exceeds the modelled cooling due to ozone depletion alone. There is an indication of a similar effect in high northern latitudes, but the greater variability in both models and observations precludes a firm conclusion.
The discrepancies between modelled and observed temperature trends in the lower stratosphere are reduced if the cooling effects of increased stratospheric water vapour concentration are included, and could be largely removed if certain assumptions were made regarding the size and distribution of the water vapour increase. However, given the uncertainties in the geographical extent of water vapour changes in the lower stratosphere, and the time period over which such changes have been sustained, other reasons for the discrepancy between modelled and observed temperature trends cannot be ruled out.
Alexander, Michael A., I Bladé, Matthew Newman, John R Lanzante, Ngar-Cheung Lau, and J D Scott, 2002: The atmospheric bridge: the influence of ENSO teleconnections on air-sea interaction over the global oceans. Journal of Climate, 15(16), 2205-2231. Abstract PDF
During El Niño-Southern Oscillation (ENSO) events, the atmospheric response to sea surface temperature (SST) anomalies in the equatorial Pacific influences ocean conditions over the remainder of the globe. This connection between ocean basins via the "atmospheric bridge" is reviewed through an examination of previous work augmented by analyses of 50 years of data from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis project and coupled atmospheric general circulation model (AGCM)-mixed layer ocean model experiments. Observational and modeling studies have now established a clear link between SST anomalies in the equatorial Pacific and those in the North Pacific, north tropical Atlantic, and Indian Oceans in boreal winter and spring. ENSO-related SST anomalies also appear to be robust in the western North Pacific during summer and in the Indian Ocean during fall. While surface heat fluxes are the key component of the atmospheric bridge driving SST anomalies, Ekman transport also creates SST anomalies in the central North Pacific although the full extent of its impact requires further study. The atmospheric bridge not only influences SSTs on interannual timescales but also affects mixed layer depth (MLD), salinity, the seasonal evolution of upper-ocean temperatures, and North Pacific SST variability at lower frequencies. The model results indicate that a significant fraction of the dominant pattern of low-frequency (>10 yr) SST variability in the North Pacific is associated with tropical forcing. AGCM experiments suggest that the oceanic feedback on the extratropical response to ENSO is complex, but of modest amplitude. Atmosphere-ocean coupling outside of the tropical Pacific slightly modifies the atmospheric circulation anomalies in the Pacific-North America (PNA) region but these modifications appear to depend on the seasonal cycle and air-sea interactions both within and beyond the North Pacific Ocean.
Bauer, M, A Del Genio, and John R Lanzante, 2002: Observed and simulated temperature-humidity relationships: sensitivity to sampling and analysis. Journal of Climate, 15(2), 203-215. Abstract PDF
Recent studies have demonstrated that the correlation between interannual variations of large-scale average temperature and water vapor is stronger and less height dependent in one GCM than in an objective analysis of radiosonde observations. To address this discrepancy, a GCM with a different approach to cumulus parameterization is used to explore the model dependence of this result, the effect of sampling biases, and the analysis scheme applied to the data.
It is found that the globally complete data from the two GCMs produce similar patterns of correlation despite their fundamentally different moist convection schemes. While this result concurs with earlier studies, it is also shown that this apparent model–observation discrepancy is significantly reduced (although not eliminated) by sampling the GCM in a manner more consistent with the observations, and especially if the objective analysis is not then applied to the sampled data. Furthermore, it is found that spatial averages of the local temperature–humidity correlations are much weaker, and show more height dependence, than correlations of the spatially averaged quantities for both model and observed data. The results of the previous studies are thus inconclusive and cannot therefore be interpreted to mean that GCMs greatly overestimate the water vapor feedback.
Free, M, I Durre, John R Lanzante, Stephen A Klein, and Brian J Soden, et al., 2002: Creating Climate Reference Datasets: CARDS Workshop on Adjusting Radiosonde Temperature Data for Climate Monitoring. Bulletin of the American Meteorological Society, 83(6), 891-899. Abstract PDF
Homogeneous upper-air temperature time series are necessary for climate change detection and attribution. About 20 participants met at the National Climatic Data Center in Asheville, North Carolina on 11-12 October 2000 to discuss methods of adjusting radiosonde data for inhomogeneities arising from instrument and other changes. Representatives of several research groups described their methods for identifying change points and adjusting temperature time series and compared the results of applying these methods to data from 12 radiosonde stations. The limited agreement among these results and the potential impact of these adjustments on upper-air trend estimates indicate a need for further work in this area and for greater attention to homogeneity issues in planning future changes in radiosonde observations.
Gaffen, D J., M A Sargent, R E Habermann, and John R Lanzante, 2000: Sensitivity of tropospheric and stratospheric temperature trends to radiosonde data quality. Journal of Climate, 13(10), 1776-1796. Abstract PDF
Radiosonde data have been used, and will likely continue to be used, for the detection of temporal trends in tropospheric and lower-stratospheric temperature. However, the data are primarily operational observations, and it is not clear that they are of sufficient quality for precise monitoring of climate change. This paper explores the sensitivity of upper-air temperature trend estimates to several data quality issues.
Many radiosonde stations do not have even moderately complete records of monthly mean data for the period 1959-95. In a network of 180 stations (the combined Global Climate Observing System Baseline Upper-Air Network and the network developed by J. K. Angell), only 74 stations meet the data availability requirement of at least 85% of nonmissing months of data for tropospheric levels (850-100 hPa). Extending into the lower stratosphere (up to 30 hPa), only 22 stations have data records meeting this requirement for the same period, and the 30-hPa monthly data are generally based on fewer daily observations than at 50 hPa and below. These networks show evidence of statistically significant tropospheric warming, particularly in the Tropics, and stratospheric cooling for the period 1959-95. However, the selection of different station networks can cause network-mean trend values to differ by up to 0.1 K decade⁻¹.
The choice of radiosonde dataset used to estimate trends influences the results. Trends at individual stations and pressure levels differ in two independently produced monthly mean temperature datasets. The differences are generally less than 0.1 K decade⁻¹, but in a few cases they are larger and statistically significant at the 99% confidence level. These cases are due to periods of record when one dataset has a distinct bias with respect to the other.
The statistical method used to estimate linear trends has a small influence on the result. The nonparametric median of pairwise slopes method and the parametric least squares linear regression method tend to yield very similar, but not identical, results with differences generally less than ±0.03 K decade⁻¹ for the period 1959-95. However, in a few instances the differences in stratospheric trends for the period 1970-95 exceed 0.1 K decade⁻¹.
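The two trend estimators compared above can be sketched directly; "median of pairwise slopes" is the Theil-Sen estimator. The series below is invented for illustration (a clean 0.02 K yr⁻¹ trend plus one deliberately gross outlier, unlike the real data) to show the circumstance in which the two methods can diverge.

```python
import numpy as np

# Illustrative series: a 0.02 K/yr trend over 1959-1995 with one gross outlier.
years = np.arange(1959, 1996)
temps = 0.02 * (years - 1959)
temps[10] += 1.5                       # single contaminated year

# Parametric least-squares slope
ls_slope = np.polyfit(years, temps, 1)[0]

# Nonparametric median of all pairwise slopes (Theil-Sen)
i, j = np.triu_indices(years.size, k=1)
pairwise = (temps[j] - temps[i]) / (years[j] - years[i])
ts_slope = np.median(pairwise)

print(ls_slope, ts_slope)
```

The median-of-slopes estimate recovers the underlying 0.02 K yr⁻¹ exactly, while the least-squares slope is pulled off by the outlier; on well-behaved records, as the abstract reports, the two estimators agree closely.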
Instrument changes can lead to abrupt changes in the mean, or change-points, in radiosonde temperature data records, which influence trend estimates. Two approaches to removing change-points by adjusting radiosonde temperature data were attempted. One involves purely statistical examination of time series to objectively identify and remove multiple change-points. Methods of this type tend to yield similar results about the existence and timing of the largest change-points, but the magnitude of detected change-points is very sensitive to the particular scheme employed and its implementation. The overwhelming effect of adjusting time series using the purely statistical schemes is to remove the trends, probably because some of the detected change-points are not spurious signals but represent real atmospheric change.
The second approach incorporates station history information to test specific dates of instrument changes as potential change-points, and to adjust time series only if there is agreement in the test results for multiple stations. This approach involved significantly fewer adjustments to the time series, and their effect was to reduce tropospheric warming trends (or enhance tropospheric cooling) during 1959-95 and (in the case of one type of instrument change) enhance stratospheric cooling during 1970-95. The trends based on the adjusted data were often statistically significantly different from the original trends at the 99% confidence level. The intent here was not to correct or improve the existing time series, but to determine the sensitivity of trend estimates to the adjustments. Adjustment for change-points can yield very different time series depending on the scheme used and the manner in which it is implemented, and trend estimates are extremely sensitive to the adjustments. Overall, trends are more sensitive to the treatment of potential change-points than to any of the other radiosonde data quality issues explored.
Lanzante, John R., and G E Gahrs, 2000: The "clear-sky bias" of TOVS upper-tropospheric humidity. Journal of Climate, 13(22), 4034-4041. Abstract PDF
A temporal sampling bias may be introduced due to the inability of a measurement system to produce a valid observation during certain types of situations. In this study the temporal sampling bias in satellite-derived measures of upper-tropospheric humidity (UTH) was examined through the utilization of similar humidity measures derived from radiosonde data. This bias was estimated by imparting the temporal sampling characteristics of the satellite system onto the radiosonde observations. This approach was applied to UTH derived from Television Infrared Observation Satellite (TIROS) Operational Vertical Sounder radiances from the NOAA-10 satellite for the period 1987-91 and from the "Angell" network of 63 radiosonde stations for the same time period. Radiative modeling was used to convert both the satellite and radiosonde data to commensurate measures of UTH.
Examination of the satellite temporal sampling bias focused on the effects of the "clear-sky bias" due to the inability of the satellite system to produce measurements when extensive cloud cover is present. This study indicates that the effects of any such bias are relatively small in the extratropics (several percent relative humidity) but may be ~5%-10% in the most convectively active regions in the Tropics. Furthermore, there is a systematic movement and evolution of the bias pattern following the seasonal migration of convection, which reflects the fact that the bias increases as cloud cover increases. The bias is less noticeable for shorter timescales (seasonal values) but becomes more obvious as the averaging time increases (climatological values); it may be that small-scale noise partially obscures the bias for shorter time averages. Based on indirect inference it is speculated that the bias may lead to an underestimate of the magnitude of trends in satellite UTH in the Tropics, particularly in the drier regions.
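The "impart the satellite's sampling onto the radiosonde record" idea can be sketched as follows. All numbers here are invented assumptions for a hypothetical tropical site (a 40% convectively cloudy fraction, moist cloudy days, dry clear days), not values from the study.

```python
import numpy as np

# Hypothetical daily upper-tropospheric humidity (%) at one tropical site:
# convectively cloudy days are moister, and the satellite cannot retrieve then.
rng = np.random.default_rng(1)
cloudy = rng.random(3650) < 0.4                  # ~40% of days cloudy
uth = np.where(cloudy, 60.0, 35.0) + rng.normal(0, 5, 3650)

# Impart the satellite's clear-sky-only sampling onto the full record
clear_sky_mean = uth[~cloudy].mean()             # what the satellite reports
true_mean = uth.mean()                           # full "radiosonde" climatology
bias = clear_sky_mean - true_mean                # the clear-sky (dry) bias
print(f"clear-sky bias: {bias:.1f}% RH")
```

Because the unobservable days are systematically the moist ones, the clear-sky mean is drier than the true climatology, and the bias grows with the cloudy fraction, mirroring the behavior described in the abstract.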
Dixon, Keith W., and John R Lanzante, 1999: Global mean surface air temperature and North Atlantic overturning in a suite of coupled GCM climate change experiments. Geophysical Research Letters, 26(13), 1885-1888.
The effects of model initial conditions and the starting time of transient radiative forcings on global mean surface air temperature (SAT) and the North Atlantic thermohaline circulation (THC) are studied in a set of coupled climate GCM experiments. Nine climate change scenario experiments, in which the effective levels of greenhouse gases and tropospheric sulfate aerosols vary in time, are initialized from various points in a long control model run. The time at which the transition from constant to transient radiative forcing takes place is varied in the scenario runs, occurring at points representing either year 1766, 1866 or 1916. The sensitivity of projected 21st century global mean SATs and the THC to the choice of radiative forcing transition point is small, and is similar in magnitude to the variability arising from variations in the coupled GCM's initial three-dimensional state.
Waliser, D E., R-L Shia, John R Lanzante, and Abraham H Oort, 1999: The Hadley circulation: Assessing NCEP/NCAR reanalysis and sparse in-situ estimates. Climate Dynamics, 15(10), 719-735.
We present a comparison of the zonal mean meridional circulations derived from monthly in-situ data (i.e., radiosondes and ship reports) and from the NCEP/NCAR reanalysis product. To facilitate the interpretation of the results, a third estimate of the mean meridional circulation is produced by subsampling the reanalysis at the locations where radiosonde and surface ship data are available for the in-situ calculation. This third estimate, known as the subsampled estimate, is compared to the complete reanalysis estimate to assess biases in conventional in-situ estimates of the Hadley circulation associated with the sparseness of the data sources (i.e., the radiosonde network). The subsampled estimate is also compared to the in-situ estimate to assess the biases introduced into the reanalysis product by the numerical model, the initialization process, and/or indirect data sources such as satellite retrievals. The comparisons suggest that a number of qualitative differences between the in-situ and reanalysis estimates are mainly associated with the sparse sampling and simplified interpolation schemes of the in-situ estimates. These differences include: (1) a southern Hadley cell that consistently extends up to 200 hPa in the reanalysis, whereas the bulk of the circulation for the in-situ and subsampled estimates tends to be confined to the lower half of the troposphere; (2) more well-defined and consistent poleward limits of the Hadley cells in the reanalysis compared to the in-situ and subsampled estimates; and (3) considerably less variability in the magnitude and latitudinal extent of the Ferrel cells and southern polar cell in the reanalysis estimate compared to the in-situ and subsampled estimates. Quantitative comparison shows that the subsampled estimate, relative to the reanalysis estimate, produces a stronger northern Hadley cell (~20%), a weaker southern Hadley cell (~20-60%), and weaker Ferrel cells in both hemispheres.
These differences stem from poorly measured oceanic areas, which necessitate significant interpolation over broad regions. Moreover, they help to pinpoint specific shortcomings in the present and previous in-situ estimates of the Hadley circulation. Comparisons between the subsampled and in-situ estimates suggest that the subsampled estimate produces a slightly stronger Hadley circulation in both hemispheres, with the relative differences in some seasons as large as 20-30%. These differences suggest that the mean meridional circulation in the NCEP/NCAR reanalysis is more energetic than observations suggest. Examination of ENSO-related changes to the Hadley circulation suggests that the in-situ and subsampled estimates significantly overestimate the effects of ENSO on the Hadley circulation due to their reliance on sparsely distributed data. While all three estimates capture the large-scale region of low-level equatorial convergence near the dateline that occurs during El Niño, the in-situ and subsampled estimates fail to effectively reproduce the large-scale areas of equatorial mass divergence to the west and east of this convergence area, leading to an overestimate of the effects of ENSO on the zonal mean circulation.
Lanzante, John R., 1998: Correction to Resistant, robust & non-parametric techniques for the analysis of climate data: Theory and examples, including applications to historical radiosonde station data. International Journal of Climatology, 18(2), 235.
Lanzante, John R., and G E Gahrs, 1997: Examination of some biases in satellite and radiosonde measures of upper tropospheric humidity using a framework for the comparison of redundant measurement systems. In Proceedings of the Twenty-First Annual Climate Diagnostics and Prediction Workshop, Springfield, VA, NTIS, 352-355.
Lanzante, John R., 1996: Lag relationships involving tropical sea surface temperatures. Journal of Climate, 9(10), 2568-2578.
A long historical record (~100 years) of monthly sea surface temperature anomalies from the Comprehensive Ocean-Atmosphere Data Set was used to examine the lag relationships between different locations in the global Tropics. Application of complex principal component (CPC) analysis revealed that the leading mode captures ENSO-related quasi-cyclical warming and cooling in the tropical Pacific Ocean. The dominant features of this mode indicate that SST anomalies in the eastern Pacific lead those of the central Pacific. However, a somewhat weaker aspect of this mode also indicates that SST anomalies in the tropical Indian and western tropical North Atlantic Oceans vary roughly in concert with each other but lag behind those in the central and eastern Pacific. The stability of these lag relationships is indicated by the fact that the leading mode is quite similar in three different 30-year time periods.
In order to further examine these relationships, some simple indexes were formed as the average over several grid points in each of the four key areas suggested by the CPC analyses. Several different types of analyses, including lag correlation, checking the correspondence between extrema, and visual examination of time series plots, were used to confirm the relationships implied by the CPC spatial patterns. By aggregating the lag correlations over the three 30-year time periods and performing a Monte Carlo simulation, the relationships were found to be statistically significant at the 1% level. Reasonable agreement in the pattern of lag correlations was found using a different SST dataset.
Without aggregation of the lag correlations (i.e., considering each 30-year period separately) the areas in the Pacific and Indian Oceans were consistently well related, but those involving the North Atlantic were more variable. The weaker correlations involving the Atlantic Ocean underscore the more tenuous nature of this remote relationship. While major ENSO-related swings in tropical Pacific SST are often followed by like variations in a portion of the Atlantic, there are times when there is either no obvious association or one of opposite sign. It may be that while ENSO variability tends to have an impact in the Atlantic, more localized factors can override this tendency. This may explain some of the contradictory statements found in the literature regarding such remote associations.
In comparing the findings of this project with some studies that utilize very recent data (since about 1982) some discrepancies were noted. In particular, some studies have reported evidence of 1) an inverse relationship between SST anomalies in the tropical Pacific and those in the eastern tropical South Atlantic and 2) the appearance of ENSO-related SST anomalies in the central tropical Pacific prior to those in the eastern tropical Pacific. From a historical perspective both of these characteristics are unusual. Thus, the recent time period may merit special attention. However, it is important to stress that caution should be exercised in generalizing findings based only on this recent time period.
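The kind of lag-correlation analysis used above to confirm lead/lag relationships between SST indexes can be sketched as follows. This is an illustrative toy example, not the paper's code; `lag_correlation` and the synthetic index series are hypothetical.

```python
import numpy as np

def lag_correlation(x, y, max_lag):
    """Pearson correlation of y against x at lags -max_lag..max_lag.
    A maximum at a positive lag k means x leads y by k time steps."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    out = {}
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            out[k] = np.mean(x[:n - k] * y[k:])   # x[t] vs y[t+k]
        else:
            out[k] = np.mean(x[-k:] * y[:n + k])  # x[t-k] vs y[t]
    return out

# Toy monthly "SST indexes": the second series lags the first by 3 months.
rng = np.random.default_rng(1)
pacific = rng.standard_normal(360)
atlantic = np.roll(pacific, 3) + 0.3 * rng.standard_normal(360)
corrs = lag_correlation(pacific, atlantic, max_lag=6)
best_lag = max(corrs, key=corrs.get)  # lag with maximum correlation
```

Scanning the correlation over a window of lags and locating its maximum is what identifies one region as leading another, as in the CPC-based analysis described above.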
Lanzante, John R., 1996: Resistant, robust & non-parametric techniques for the analysis of climate data: Theory and examples, including applications to historical radiosonde station data. International Journal of Climatology, 16(11), 1197-1226.
Basic traditional parametric statistical techniques are used widely in climatic studies for characterizing the level (central tendency) and variability of variables, assessing linear relationships (including trends), detection of climate change, quality control and assessment, identification of extreme events, etc. These techniques may involve estimation of parameters such as the mean (a measure of location), variance (a measure of scale) and correlation/regression coefficients (measures of linear association); in addition, it is often desirable to estimate the statistical significance of the difference between estimates of the mean from two different samples as well as the significance of estimated measures of association. The validity of these estimates is based on underlying assumptions that sometimes are not met by real climate data. Two of these assumptions are addressed here: normality and homogeneity (and, as a special case, statistical stationarity); in particular, contamination from a relatively few 'outlying values' may greatly distort the estimates. Sometimes these common techniques are used in order to identify outliers; ironically, they may fail because of the presence of the outliers!
Alternative techniques drawn from the fields of resistant, robust and non-parametric statistics are usually much less affected by the presence of 'outliers' and other forms of non-normality. Some of the theoretical basis for the alternative techniques is presented as motivation for their use and to provide quantitative measures for their performance as compared with the traditional techniques that they may replace. Although this work is by no means exhaustive, typically a couple of suitable alternatives are presented for each of the common statistical quantities/tests mentioned above. All of the technical details needed to apply these techniques are presented in an extensive appendix.
With regard to the issue of homogeneity of the climate record, a powerful non-parametric technique is introduced for the objective identification of 'change-points' (discontinuities) in the mean. These may arise either naturally (abrupt climate change) or as the result of errors or changes in instruments, recording practices, data transmission, processing, etc. The change-point test is able to identify multiple discontinuities and requires no 'metadata' or comparison with neighbouring stations; these are important considerations because instrumental changes are not always documented and, particularly with regard to radiosonde observations, suitable neighbouring stations for 'buddy checks' may not exist. However, when such auxiliary information is available it may be used as independent confirmation of the artificial nature of the discontinuities.
The application and practical advantages of these alternative techniques are demonstrated using primarily actual radiosonde station data and in a few cases using some simulated (artificial) data as well. The ease with which suitable examples were obtained from the radiosonde archive begs for serious consideration of these techniques in the analysis of climate data.
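A minimal sketch of the flavor of techniques discussed above: the median and scaled median absolute deviation (MAD) as resistant analogues of the mean and standard deviation, plus a crude rank-based change-point locator. The latter is far simpler than the test developed in the paper and is offered only as an illustration of the rank-based idea; all names and data here are hypothetical.

```python
import numpy as np

def median_mad(x):
    """Resistant location and scale: median, and MAD scaled by 1.4826 so it
    is consistent with the standard deviation for Gaussian data."""
    med = np.median(x)
    return med, 1.4826 * np.median(np.abs(x - med))

def change_point(x):
    """Crude rank-based change-point locator: for each candidate split,
    compare the mean rank of the two segments and return the split index
    maximizing the standardized (Wilcoxon-like) difference."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1.0
    best_i, best_stat = 1, 0.0
    for i in range(1, n):
        m = n - i
        # Null standard deviation of the mean-rank difference.
        sd = n * np.sqrt((n + 1) / (12.0 * i * m))
        stat = abs(ranks[:i].mean() - ranks[i:].mean()) / sd
        if stat > best_stat:
            best_i, best_stat = i, stat
    return best_i, best_stat

# Toy series with an artificial discontinuity (e.g., an instrument change)
# at index 60, plus one wild outlier that would distort the sample mean.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 60)])
x[10] = 25.0
loc, scale = median_mad(x)  # barely affected by the outlier
cp, _ = change_point(x)     # split index near the true discontinuity
```

Because both estimators work on ranks or order statistics, the single wild value moves them very little, in contrast to the mean and variance, which is precisely the resistance property motivating the paper.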
Soden, Brian J., and John R Lanzante, 1996: An assessment of satellite and radiosonde climatologies of upper-tropospheric water vapor. Journal of Climate, 9(6), 1235-1250.
This study compares radiosonde and satellite climatologies of upper-tropospheric water vapor for the period 1979-1991. Comparison of the two climatologies reveals significant differences in the regional distribution of upper-tropospheric relative humidity. These discrepancies exhibit a distinct geopolitical dependence that is demonstrated to result from international differences in radiosonde instrumentation. Specifically, radiosondes equipped with goldbeater's skin humidity sensors (found primarily in the former Soviet Union, China, and eastern Europe) report a systematically moister upper troposphere relative to the satellite observations, whereas radiosondes equipped with capacitive or carbon hygristor sensors (found at most other locations) report a systematically drier upper troposphere. The bias between humidity sensors is roughly 15%-20% in terms of the relative humidity, being slightly greater during summer than during winter and greater in the upper troposphere than in the midtroposphere. However, once the instrumentation bias is accounted for, regional variations of satellite and radiosonde upper-tropospheric relative humidity are shown to be in good agreement. Additionally, temporal variations in radiosonde upper-tropospheric humidity agree reasonably well with the satellite observations and exhibit much less dependence upon instrumentation.
The impact that the limited spatial coverage of the radiosonde network has upon the moisture climatology is also examined and found to introduce systematic errors of 10%-20% relative humidity over data-sparse regions of the Tropics. It is further suggested that the present radiosonde network lacks sufficient coverage in the eastern tropical Pacific to adequately capture ENSO-related variations in upper-tropospheric moisture. Finally, we investigate the impact of the clear-sky sampling restriction upon the satellite moisture climatology. Comparison of clear-sky and total-sky radiosonde observations suggests the clear-sky sampling limitation introduces a modest dry bias (<10% relative humidity) in the satellite climatology.
Lanzante, John R., 1995: Analysis of climate data using resistant, robust and nonparametric techniques: Some examples and some applications to the historical radiosonde record. In Proceedings of the 19th Annual Climate Diagnostics Workshop, Springfield, VA, NTIS, 37-40.
Lanzante, John R., 1993: Circulation response in GFDL increased CO2 experiments and comparison with observed data. In Proceedings of the 17th Annual Climate Diagnostics Workshop, Springfield, VA, NTIS, 248-253.
Lanzante, John R., 1992: A comparison of the stationary wave responses in several GFDL increased CO2 GCM experiments. In Proceedings of the Sixteenth Annual Climate Diagnostics Workshop, U. S. Dept. of Commerce/NOAA/NWS, 241-246.
Lanzante, John R., 1991: Time scales of ENSO variability: A COADS/coupled model comparison. In Proceedings of the Fifteenth Annual Climate Diagnostics Workshop, Springfield, VA, NTIS, 42-47.