
Agenda


Workshop on AWIPS Tools for Probabilistic Forecasting
Hosted by ESRL/Global Systems Division
Boulder, CO
22-24 October 2008
    For a detailed final review of the Workshop's "Findings and Outcomes", click here.
    Click here for a complete review of feedback and other information provided by the Workshop participants.
    See below for presentations on each topic and discussion, where available.
Wednesday
  8:30 AM    Introductions and opening remarks (Steve Koch) - Presentation
  8:45 AM    Goals of the workshop (Paul Schultz) - Presentation
  9:00 AM    Presentations on the big picture
                     9:00 AM Doug Hilderbrand - planning at HQ - Presentation
                     9:30 AM Dave Novak - review of others' methods - Presentation
10:00 AM   Presentations by forecasters
                    10:00 AM Rich Grumm - Presentation
                    10:20 AM Steve Amburn - a PQPF experiment at Tulsa - Presentation
                    10:40 AM Larry Dunn - lessons learned in WR - Presentation
11:00 AM   Strawman Forecast Process and prototype tools/products (Tom LeFebvre) - Presentation
11:30 AM   Discussion - Brainstorming the forecast process
12:00 PM   Lunch
  1:00 PM   Hands on Session #1
                    D2D Prototype Capabilities (Jim Ramer) - Cheat Sheet (Short); Cheat Sheet (Extended)
                    GFE Prototype Capabilities (Tom LeFebvre, Tracy Hansen)
                    Cheat Sheets:
                    Hist Sampler
                    Ensemble Weights
                    Edit Histogram
                    CalcEnsembleStats
                    Text Products
                    3D Visualization Capability (Ashvin Mysore) - Cheat Sheet
                    D2D Capabilities with Forecaster Weights (Jim Ramer, Tom LeFebvre) - See above Cheat Sheets
  2:45 PM   Break
  3:00 PM   Centrally-produced guidance
                    3:00 PM Matt Peroutka - EKDMOS - Presentation #1, Presentation #2; EKDMOS website
                    3:20 PM Tom Hamill - reforecast methods - Presentation
                    3:40 PM Mike Charles - NCEP products
                    4:00 PM Scott Jacobs - NAWIPS
  4:15 PM   Discussion - Guidance and the Forecast Process
Dinner and social time at the Lazy Dog
Thursday
  8:30 AM   Consideration of customer needs, products, and dissemination; EMP3 demo - Presentation
  9:00 AM   Discussion - The emphasis of this workshop is forecast preparation, but what do we need to know about customer needs that might impact the forecast process? Do we need an intermediary to translate the forecasts to the public?
  9:30 AM   Hands on Session #2 - Forecasters work with an archived case to identify outliers and to sharpen the PDF using the prototype capabilities
10:30 AM   Break
10:45 AM   Forecaster presentations - hurricane emphasis
                    10:45 AM Pablo Santos - Presentation
                    11:05 AM John Knaff
11:30 AM   Discussion - Forecast Process revisited - alternatives to NWP
12:00 PM   Lunch
  1:30 PM   Hands on Session #3 - More exploration of workstation capabilities and deficiencies
  3:00 PM   Break
  3:15 PM   Discussion - Ensemble data: What is needed? What are the appropriate subsets of "everything"? What about reforecasting?
Friday
  8:30 AM   Path to Operations - Feedback on prototype tools and products
  9:00 AM   Discussion - Severe weather services: Coordinating WFO warnings with SPC outlooks, coordinating severe weather forecasts and state-variable forecasts
10:00 AM   Break
10:15 AM   Discussion - Summarize findings and identify next steps
12:00 PM   Workshop Ends

Accommodations

Boulder Outlook Hotel: (800) 542-0304; (303) 443-3322 -- Note there is live music here every night!
Boulder Broker Hotel: (303) 444-3330
Homewood Suites in Boulder:
4950 Baseline Rd., Boulder, Colorado, United States 80303
Tel: 1-303-499-9922
Fax: 1-303-499-6706

Attendees (Tentative - updated Oct 8, 2008)

    Doug Hilderbrand, OST
    Steve Schotz, OST
    Scott Jacobs, NCO
    Matt Peroutka, MDL
    Pablo Santos, WFO Miami
    Steve Amburn, WFO Tulsa
    Rich Grumm, WFO State College
    Larry Dunn, WFO Salt Lake
    Dave Metze, WFO Pueblo
    Michael Hudson, NWS CR
    Andrea Schumacher, CIRA


Background

Expressing the uncertainty in our weather forecasts is the right thing to do. As scientists, we know that intuitively. We don't even know exactly what the weather is right now; how can we possibly know what it will be in the future? Obviously we can't, but you'd never know it from looking at our forecast services. The high will be 57 degrees tomorrow!

Somebody at NWS HQ decided to commission the National Research Council to produce the report "Completing the Forecast". If anybody knows who that was, I'd be happy to acknowledge him or her; it was a visionary move. That report identified the societal requirements and the scientific opportunity before us. Maybe that same person appointed the NFUSE committee. In any case, one important NFUSE activity was to conduct a survey of NWS forecasters on the subject of probabilistic forecasting. Where are we now? What forecast products and services could be enhanced with appropriate expressions of uncertainty? What do AWIPS and NAWIPS have to do to help forecasters do their job? How will AWIPS II need to change to enable forecasters to meet the challenge of producing the full range of probabilistic forecasting services? Together, the NRC report and the survey results form the basis of the requirements now under OSIP review.

The Earth System Research Laboratory in Boulder has a collection of scientists in a position to support the NWS in this transition. Upstairs is the crew that contributed heavily to the development of key AWIPS components: the display for 2D data (D2D) and the Graphical Forecast Editor (GFE). Down the hall are the guys who built a GFS-based reforecast system that produces objective estimates of certain probabilities and implemented it at NCEP. This place is full of NWP experts. When the NRC report came out and the NWS formed the NFUSE committee, a group of us in the Global Systems Division decided to conduct a small pilot project prototyping workstation tools to support probabilistic forecasting. That was about a year ago.

Today we're addressing only the workstation part; it's a fraction of everything that has to happen. NDFD has to expand to include gridded information on uncertainty. MDL and EMC have to produce guidance for probabilistic forecasts. It goes on and on, but the big picture is NFUSE's problem. We are here specifically to think through the forecast process in an era of probabilistic forecast expression. Those of us conducting this workshop require your thoughtful analysis of scientific realities, the task at hand, and the logistics of production. We developers will present to you our concept of the solution system, we will give you a progress report on our toolkit development (in the form of demonstrations and exercises), and then we're going to ask for your guidance on how we should proceed. Is our conceptualization of the forecast process on the right track? What software components do we need to trash, or improve, or invent?

The Forecast Process

We don't really know how probabilistic forecasting should be accomplished in practice, so we're going to make our best guess and take it from there. Let's start by assuming it will be modeled after current practices. Today, forecasters look at surveillance products and a variety of guidance products; in WFOs, they load GFE with first guesses from whatever gridded estimates they like best, and then they modify those estimates to account for known biases in the guidance, information from predictors not included in the first guess, or their own intuition.

As a result of other workshops on uncertainty expression, a weak consensus seems to have emerged that, initially, we can provide much of the potential value of probabilistic forecast expression by simply adding two values to today's deterministic expression: the value below which there is a 10% chance the verification will fall (the 10th percentile), and the value above which there is a 10% chance the verification will fall (the 90th percentile). The current deterministic quantity represents the median: the value the verification is equally likely to come in above or below. For example: there is an 80% chance that the high temperature for tomorrow will be between 53F and 59F; our best estimate is 57F. NDFD would need to add two additional fields to complete the temperature forecast.

Given this, NWS HQ will be required to provide you with forecast guidance on those 10th and 90th percentile values in addition to the deterministic fields they already produce. That work is underway. MDL is working on gridded MOS, and NCEP is focusing on post-processing of ensemble NWP of at least two varieties. There is a lively and collegial competition among these groups, which is sure to lead to high-quality products. Early evidence suggests that on many days of the year forecasters will be able to simply review the guidance and "push the button". That is not just an attractive design goal; we believe it is imperative, because it frees forecasters to dig into high-impact situations, including case research and other exceptional measures. That cannot happen unless they are relieved of many of the day-to-day duties that currently occupy their time. Preparing for high-impact weather is an activity to be done during fair weather.

Brian Colle of SUNY Stony Brook sent around an email describing how he used a WRF-based ensemble, with subjective weighting and blending, to sharpen probabilistic forecasts of weather associated with Hurricane Hanna. It is an illustration of how the process may work. He knows that certain PBL parameterizations sometimes position moisture convergence maxima in the wrong location in onshore flow. He knows that certain convective parameterizations tend to produce precipitation too far upstream. Based on this knowledge, he chooses ensemble members to ignore or to emphasize in the blending.

This is the approach we're starting with. The SREF ensemble is an ideal starting point, because it contains members with several different parameterizations of the essential processes. (The GEFS uses a single model initialized with multiple initial conditions; that is a different and valid way to diversify an ensemble, but it precludes attributing biases to particular model formulations.)