A survey of ensemble forecast calibration techniques using reforecast data

Tom Hamill
Climate Diagnostics Center (CDC)


Abstract

Despite tremendous progress in operational numerical weather prediction in recent decades, numerical forecasts free of systematic errors have not been achieved. Forecast users may thus benefit from a statistical adjustment of the forecast to counteract model bias, to correct for overconfidence in ensemble forecasts, or to downscale the coarse-resolution forecast to a sub-area of interest.
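
As a simple illustration of the first of these adjustments, the sketch below removes a mean bias estimated from a training data set. This is a minimal sketch, not the method presented in the talk; the names (bias_corrected, train_fcst, train_obs) are hypothetical, and it assumes matched forecast/observation pairs at a single location and lead time.

    import numpy as np

    def bias_corrected(fcst_ens, train_fcst, train_obs):
        """Remove the mean forecast error estimated from a training period.

        fcst_ens   : (n_members,) today's ensemble at one location (hypothetical)
        train_fcst : (n_days,) past forecasts at the same location and lead
        train_obs  : (n_days,) verifying observations for those forecasts
        """
        bias = np.mean(train_fcst - train_obs)  # mean systematic error
        return fcst_ens - bias                  # shift every member by the bias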

At the Climate Diagnostics Center, we have developed a 25-year reforecast dataset using a 1998 version of the NCEP MRF model. A 15-member ensemble is run to a lead of 15 days for every day since 1979, and a real-time forecast is produced with the same model version. With a training data set that is consistent with the operational forecast model, the model systematic errors can be evaluated and corrected much more carefully than is possible with small training data sets. Additionally, this long training data set permits us to answer questions such as: (1) How long a training data set is needed to statistically adjust the forecasts, and does this vary with the rarity of the weather phenomenon? (2) Is one method of statistical adjustment superior to another? (3) Are more complex statistical adjustment techniques superior to simpler ones? We answer these questions and demonstrate a statistical analog technique (sketched below) that produces well-calibrated, downscaled probabilistic quantitative precipitation forecasts (QPF) that are more skillful than operational NCEP ensemble forecasts.
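
The following is a minimal sketch of one analog approach, assuming the reforecast archive has been reduced to a forecast predictor vector and a verifying precipitation observation per date at a location of interest. The function and variable names are hypothetical, and the technique presented in the talk may differ in its similarity measure and spatial handling. The idea: find the past dates whose forecasts most resemble today's forecast, then build exceedance probabilities from the observations on those analog dates.

    import numpy as np

    def analog_pqpf(todays_fcst, past_fcsts, past_obs, n_analogs=50,
                    thresholds=(1.0, 5.0, 25.0)):
        """Probabilistic QPF from forecast analogs (hypothetical names).

        todays_fcst : (n_features,) today's forecast predictors at a location
        past_fcsts  : (n_dates, n_features) reforecast predictors, same location/lead
        past_obs    : (n_dates,) verifying precipitation amounts (mm)
        """
        # Rank past forecasts by RMS distance to today's forecast.
        dist = np.sqrt(np.mean((past_fcsts - todays_fcst) ** 2, axis=1))
        closest = np.argsort(dist)[:n_analogs]
        analog_obs = past_obs[closest]
        # Probability of exceeding each threshold = fraction of analog
        # observations that exceeded it.
        return {thr: float(np.mean(analog_obs > thr)) for thr in thresholds}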


6 April 2005
2 PM / DSRC 1D 403