HWT/DTC 2011 Evaluation - Objective Evaluation

The Experimental Forecast Program (EFP) component of the NOAA Hazardous Weather Testbed (HWT) has conducted Spring Experiments since 2000. The main focus of recent Spring Experiments has been to gain an understanding of how to better use the output of near-cloud-resolving configurations of numerical weather prediction (NWP) models to predict convective storms. The primary organizers of the HWT-EFP are the National Severe Storms Laboratory (NSSL) and the Storm Prediction Center (SPC). The experiences of the HWT-EFP participants have shown that high-resolution convective storm predictions are at times difficult for operational forecasters to reconcile, in part because many solutions appear plausible for a given mesoscale environment. Subjective evaluation has the potential to serve as a comparative benchmark for assessing cutting-edge verification techniques designed for high-resolution convection-allowing models; these evaluations have had a significant, positive impact on model development strategies. The 2011 HWT Spring Experiment page may be found at: http://hwt.nssl.noaa.gov/Spring_2011

The Model Evaluation Tools (MET), developed by the Developmental Testbed Center (DTC), will be used during the 2011 Spring Experiment to objectively evaluate the models' performance. Three important goals of these evaluations are: (i) to provide objective evaluations of the experimental forecasts; (ii) to supplement and compare with subjective assessments of performance; and (iii) to expose forecasters and researchers to both new and traditional approaches for evaluating precipitation forecasts.

MET provides a variety of statistical tools for evaluating model-based forecasts using both gridded and point observations. Model forecasts of accumulated precipitation and reflectivity will be evaluated using the Grid-Stat and MODE tools within MET. Grid-Stat applies traditional verification methods to gridded datasets, computing metrics such as the Equitable Threat Score (ETS), frequency bias, Critical Success Index (CSI), and a host of other statistics. Statistics for neighborhood methods, such as the Fractions Skill Score (FSS), are also computed in Grid-Stat. MODE, the Method for Object-Based Diagnostic Evaluation, provides an object-based verification of gridded forecasts by identifying and matching "objects" (i.e., areas of interest) in the forecast and observed fields and comparing the attributes of the forecast/observation object pairs.
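To make the traditional and neighborhood scores named above concrete, the following Python sketch computes CSI, frequency bias, ETS, and FSS from thresholded forecast and observed precipitation grids. It is a minimal illustration rather than the MET implementation: the grid size, the 5 mm threshold, and the 9-gridpoint neighborhood width are arbitrary choices for the example, and the random fields stand in for real model output and an observed analysis.

    import numpy as np
    from scipy.ndimage import uniform_filter  # box average for neighborhood fractions

    def contingency_scores(fcst, obs, threshold):
        """Return CSI, frequency bias, and ETS for a single threshold."""
        f = fcst >= threshold
        o = obs >= threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        n = f.size
        # Hits expected by chance, which makes the threat score "equitable"
        hits_random = (hits + false_alarms) * (hits + misses) / n
        csi = hits / (hits + false_alarms + misses)
        freq_bias = (hits + false_alarms) / (hits + misses)
        ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
        return csi, freq_bias, ets

    def fractions_skill_score(fcst, obs, threshold, width):
        """FSS over a square neighborhood of `width` grid points."""
        pf = uniform_filter((fcst >= threshold).astype(float), size=width)
        po = uniform_filter((obs >= threshold).astype(float), size=width)
        mse = np.mean((pf - po) ** 2)
        mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
        return 1.0 - mse / mse_ref

    # Toy 100x100 "precipitation" fields; real use would read gridded model
    # output and an observed precipitation analysis on a common grid.
    rng = np.random.default_rng(0)
    fcst = rng.gamma(2.0, 2.0, size=(100, 100))
    obs = rng.gamma(2.0, 2.0, size=(100, 100))
    print(contingency_scores(fcst, obs, threshold=5.0))
    print(fractions_skill_score(fcst, obs, threshold=5.0, width=9))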

In 2011, the DTC is focusing on probabilistic predictions, with an emphasis on extreme precipitation events and on strong convection as it relates to convective initiation. The evaluation will include all members of the Center for Analysis and Prediction of Storms (CAPS) Storm Scale Ensemble Forecast (SSEF) system for selected variables, as well as ensemble products selected by SPC. Operational (or near-operational) deterministic and probabilistic models will be used as a baseline for comparison.
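As a rough illustration of how an ensemble such as the SSEF can be scored probabilistically against a deterministic baseline, the sketch below converts member exceedances into event probabilities and evaluates them with a Brier score. This is only a schematic under assumed inputs (a hypothetical 20-member ensemble and a 25 mm threshold standing in for an extreme-precipitation event), not the actual MET or SSEF evaluation workflow.

    import numpy as np

    def ensemble_probability(members, threshold):
        """Fraction of ensemble members exceeding the threshold at each grid point."""
        return np.mean(members >= threshold, axis=0)

    def brier_score(prob, obs, threshold):
        """Mean squared error of forecast probabilities against the 0/1 observed event."""
        event = (obs >= threshold).astype(float)
        return np.mean((prob - event) ** 2)

    # Hypothetical 20-member ensemble and observed field on a 100x100 grid;
    # 25.0 is an arbitrary stand-in for an "extreme precipitation" threshold.
    rng = np.random.default_rng(1)
    members = rng.gamma(2.0, 5.0, size=(20, 100, 100))
    obs = rng.gamma(2.0, 5.0, size=(100, 100))

    prob = ensemble_probability(members, threshold=25.0)
    print("Ensemble Brier score:", brier_score(prob, obs, threshold=25.0))

    # A deterministic baseline is scored on the same footing by treating its
    # exceedance field (0 or 1) as a degenerate probability forecast.
    baseline_prob = (members[0] >= 25.0).astype(float)
    print("Deterministic baseline Brier score:", brier_score(baseline_prob, obs, threshold=25.0))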