2006 PSD Seminars
Evaluating climate model simulations of clouds, radiation, and precipitation
ESRL/PSD and CIRES, University of Colorado
Global climate models are the main tool used to project future climate, and many such models exist, both across the various modeling centers and over time as each model is developed. Predictions of future climate change are not (yet) verifiable, so model skill must be evaluated by comparing simulations of present-day climate with observations. But how is this comparison to be made? The climate modeling community has no agreed-upon measures of skill, so models are evaluated with a constantly changing set of criteria. Furthermore, climate models are often assessed by examining both forced modes (e.g. the annual, seasonal, or diurnal cycles) and modes that represent internal variability (e.g. El Niño or the Madden-Julian oscillation). By contrast, the weather forecasting community has a common, well-defined set of metrics for forecasts. Metrics are low-order measures of skill; they indicate model performance (and may not help isolate the root of any model flaws). The metrics useful for short-term forecasts, however, are not very useful for climate simulations.
In this talk I'll present a short hierarchy of metrics for evaluating the simulation of clouds, radiation, and precipitation in the present-day climate. There are three interrelated components:
The metrics are simple to compute from low-resolution data (i.e. monthly mean fields of single-level quantities) so models (or versions of models) may be easily compared. I'll show the metrics for all runs submitted to the IPCC database for the Fourth Assessment as well as two potential ringers, and demonstrate an extension to the diagrams developed by Karl Taylor to summarize three statistical quantities for each model at one time.
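A Taylor diagram summarizes three statistics for each model: the pattern correlation with observations, the ratio of simulated to observed standard deviation, and the centered root-mean-square error. As a sketch of how such low-order metrics might be computed from monthly-mean fields (the function name and interface are illustrative, not from the talk):

```python
import numpy as np

def taylor_stats(model, obs):
    """Three statistics summarized on a Taylor diagram, computed from
    flattened model and observed fields (illustrative sketch)."""
    m = np.asarray(model, dtype=float).ravel()
    o = np.asarray(obs, dtype=float).ravel()
    # Remove the means so the statistics describe the spatial pattern only.
    mp = m - m.mean()
    op = o - o.mean()
    corr = (mp * op).mean() / (mp.std() * op.std())  # pattern correlation
    std_ratio = m.std() / o.std()                    # amplitude of variability
    crmse = np.sqrt(((mp - op) ** 2).mean())         # centered RMS error
    return corr, std_ratio, crmse
```

A perfect simulation gives correlation 1, standard-deviation ratio 1, and centered RMS error 0, which places the model at the reference point of the diagram.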
Wednesday 18 October, 2006
2:00 PM (Refreshments at 1:50 PM)
DSRC Multipurpose Room (GC402)