Weather Forecasting ... On-Line

Introduction to Verification


Introductory Comments

If you routinely prepare weather forecasts, you are likely interested in how well your forecast does compared to the actual weather that occurs. Although this comparison seems relatively easy to do, it can be more complex than it appears. The purpose of this web page is to explore the meaning of verification and examine reasons why verification is important. The companion web page, Verification Measures, looks at a variety of ways to quantitatively assess the "goodness" of a forecast.

What is Verification?

The Compendium of Meteorology (American Meteorological Society) defines verification as follows:

"the entire process of comparing the predicted weather with the actual weather,
utilizing the data so obtained to produce one or more indices or scores and
then interpreting these scores by comparing them with some standard depending
upon the purpose to be served by the verification."

This definition points out several aspects of verification that need to be highlighted:

  • Verification is a comparison of the forecast weather with the actual weather that occurs. In most cases this comparison is straightforward. For example, if you are forecasting the maximum temperature for the daytime period, it is easy to observe that value and compare it to the forecast value. However, if you want to see how well your winter storm warning verified, a more complex comparison is needed, one that includes the definition of "winter storm warning" and the collection of winter weather data over the area covered by the warning.
  • Verification uses an index or score to assess the "goodness" of the comparison. These indices or scores come in a variety of forms and in some cases, more than one index or score may be needed to fully assess the comparison. These indices or scores are discussed in some detail in the companion web page, Verification Measures.
  • Verification compares the index or score to some standard. This standard represents an expected or minimum "level of goodness" that forecasts should attain. If forecasts do not meet this standard, some change is needed to improve the forecast process. (A short sketch of these comparison, score, and standard steps follows this list.)
  • The terms evaluation, performance measure, and metric are frequently used in lieu of the term verification. Also, verification is usually thought of as a numerical comparison or score, but it can take the form of a more qualitative comparison.
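
The short Python sketch below illustrates these steps for a week of hypothetical daytime maximum temperature forecasts. The temperature values, the use of persistence (yesterday's observed maximum) as the standard, and the simple mean absolute error score are all assumptions chosen only for this illustration; actual indices and scores are discussed in the companion web page, Verification Measures.

    # Hypothetical maximum temperature forecasts (deg F) and what was observed.
    forecast_max = [72, 68, 75, 80, 77, 65, 70]
    observed_max = [70, 69, 78, 79, 73, 66, 72]

    # An assumed "standard" to beat: persistence, i.e., yesterday's observed
    # maximum used as today's forecast (71 deg F is the prior day's observation).
    persistence = [71] + observed_max[:-1]

    def mean_absolute_error(fcst, obs):
        """Average absolute difference between forecast and observed values."""
        return sum(abs(f - o) for f, o in zip(fcst, obs)) / len(obs)

    mae_forecast = mean_absolute_error(forecast_max, observed_max)  # the score
    mae_standard = mean_absolute_error(persistence, observed_max)   # the standard

    print(f"Forecast MAE:    {mae_forecast:.1f} deg F")
    print(f"Persistence MAE: {mae_standard:.1f} deg F")
    # If the forecast MAE is not smaller than the standard's, the forecast
    # process needs some kind of improvement.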

Why is Verification Important?

In the early 1980s the National Weather Service examined the verification process and developed a list of reasons why a verification system is needed. These reasons include:

  • A verification program is used to establish a baseline of skill or accuracy against which subsequent changes in forecast procedures or new products can be measured.
  • A verification program provides managers with information that can be used in making decisions about organizational structure, training, research and development, and forecaster ratings.
  • A verification program provides quality control of forecast products.
  • A verification program provides a means to assess the economic value of forecasts.

A verification program should have several goals:

  • To verify the products that go out to your users. This is important. You need to verify the forecast that the user sees, not some intermediate step or internal product.
  • To verify the phenomena that are forecast. Many forecasts include multiple weather elements. Some are easier to verify than others. All elements included in the forecast should be verified.
  • To minimize the "workload" associated with the verification process. Collecting and processing forecast and verification data is a time-consuming task. To be done efficiently, the process must be automated by computer, with statistics (indices or scores) produced on a regular schedule (a small sketch of such an automated run follows this list).
  • To provide rapid feedback to the forecaster. Forecasters need to know fairly quickly how well they have done, particularly for significant weather situations.
  • To be fair or perceived to be fair by forecasters. A verification system must not be biased toward any particular weather event or forecasting method. Forecasters must feel that they can do as well or better than any other forecaster. Some argue that verification scores should not be used to rate forecasters due to the variability of the weather.
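
As one illustration of the workload and feedback goals, the sketch below shows how a daily verification run might be automated. The file names, CSV column names, and the simple temperature score are assumptions made for this example only; an actual office would tailor them to its own products.

    import csv
    from datetime import date

    def verify_day(forecast_file, observation_file, log_file):
        """Pair one day's forecasts with observations and append a score to a running log."""
        with open(forecast_file, newline="") as f:
            forecasts = {row["station"]: float(row["max_temp_f"]) for row in csv.DictReader(f)}
        with open(observation_file, newline="") as f:
            observed = {row["station"]: float(row["max_temp_f"]) for row in csv.DictReader(f)}

        # Score only the stations that have both a forecast and an observation.
        stations = forecasts.keys() & observed.keys()
        errors = [abs(forecasts[s] - observed[s]) for s in stations]
        mae = sum(errors) / len(errors)

        with open(log_file, "a", newline="") as f:
            csv.writer(f).writerow([date.today().isoformat(), len(errors), round(mae, 2)])

    # Run once per day (for example, from a scheduled job) so forecasters
    # receive feedback quickly:
    # verify_day("fcst_today.csv", "obs_today.csv", "verification_log.csv")
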
What Should You Verify?

You need to verify what you forecast! The question is: what did you forecast? Forecasts can be divided into four basic types:

  • Explicit Values: Some forecasts contain explicit values. For example, maximum and minimum temperatures are explicit. There is no question what is forecast, and comparisons with observed values are fairly straightforward.
  • Probability Forecasts: Precipitation forecasts are often expressed in terms of the chance of rain or snow. For example, the forecast might call for a 40 percent chance of rain. How do you verify these types of forecasts? (One common approach is sketched after this list.)
  • Areal Coverage Forecasts: Forecasts of showers and thunderstorms can be given in terms of areal coverage, such as isolated, widely scattered, scattered, or numerous. These terms require a different approach to verification than explicit or probability forecasts.
  • Descriptive terms: Lastly, how do you verify forecasts that use terms such as "hot" or "chilly" or similar phrases?
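
For the probability case, one widely used index is the Brier score, the mean squared difference between the forecast probability and the observed outcome (0 for no event, 1 for the event), with a perfect set of forecasts scoring 0. The sketch below uses made-up probability-of-precipitation forecasts purely for illustration.

    # Hypothetical probability-of-precipitation forecasts and the outcomes
    # (1 = measurable rain observed, 0 = no rain).
    pop_forecasts = [0.4, 0.7, 0.1, 0.9, 0.2, 0.6]
    rain_observed = [0,   1,   0,   1,   1,   0]

    def brier_score(probs, outcomes):
        """Mean squared difference between forecast probability and outcome; 0 is perfect."""
        return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

    print(f"Brier score: {brier_score(pop_forecasts, rain_observed):.3f}")
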
To verify any of the above types of forecast, you need two things:

    1. definitions for the terminology used in the forecasts
    2. time frames over which the forecasts are valid

In addition, you need to decide how to treat ranges of forecast values and whether you are forecasting for a point or for an area.

Once your verification system answers all of these questions, you need to collect the forecasts and their accompanying verifying data. This gives you a set of forecast-observation pairs that are the basis for the statistical comparisons described in the companion web page, Verification Measures. (A small sketch of building such pairs for a descriptive forecast follows.)
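
As a small illustration, the sketch below turns a descriptive forecast into forecast-observation pairs once a definition has been fixed. The definition of "hot" (an observed maximum of 90 degrees F or higher) and the sample data are assumptions made only for this example; the resulting counts feed the kinds of scores described in Verification Measures.

    # An assumed definition: "hot" means an observed maximum of 90 deg F or higher.
    HOT_THRESHOLD_F = 90

    # One entry per day: did the forecast say "hot", and what maximum was observed?
    forecast_said_hot = [True, False, True, True, False]
    observed_max_f    = [93,   88,    89,   95,   91]

    observed_hot = [t >= HOT_THRESHOLD_F for t in observed_max_f]

    # Tally the 2x2 contingency counts from the forecast-observation pairs.
    pairs = list(zip(forecast_said_hot, observed_hot))
    hits          = sum(f and o for f, o in pairs)
    misses        = sum(not f and o for f, o in pairs)
    false_alarms  = sum(f and not o for f, o in pairs)
    correct_nulls = sum(not f and not o for f, o in pairs)

    print(f"hits={hits}  misses={misses}  false alarms={false_alarms}  correct nulls={correct_nulls}")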

Concluding Remarks

Verification is often overlooked in a forecast office. It should not be. A forecaster needs to know how well he or she does. Verification aids the learning process and builds forecasting experience; it helps a forecaster learn which forecast methods work in particular situations and where improvement is needed.


References

  • National Verification Task Team, 1982: National Verification Plan. National Weather Service, Silver Spring, MD, 81 pp.
  • Brier, G.W., and R.A. Allen, 1951: Verification of weather forecasts. In: T.F. Malone, ed., Compendium of Meteorology, American Meteorological Society, Boston, 841-848.
