Probabilistic vs deterministic forecasts – interpreting skill statistics for the benefit of users
Abstract
Owing to the probabilistic uncertainties associated with seasonal forecasts, especially over areas such as southern Africa where forecast skill is limited, non-climatologists and users of such forecasts frequently prefer them to be presented or distributed in terms of the likelihood (expressed as a probability) of certain categories occurring or thresholds being exceeded. Probabilistic verification methods are needed to assess such forecasts. Whilst the resulting verification statistics can provide clear insights into forecast attributes, they are often difficult to understand, which might hinder forecast uptake and use. This problem can be addressed by issuing forecasts with some understandable evidence of skill, with the purpose of reflecting how similar forecasts may have performed in the past. In this paper, we present a range of probabilistic forecast verification scores and determine whether these statistics can be readily compared to the more commonly known and understood ‘ordinary’ correlations between forecasts and their associated observations – assuming that ordinary correlations are more intuitively understood and informative to seasonal forecast users. Of the range of scores considered, the relative operating characteristic (ROC) score was found to be the most intrinsically similar to correlation.
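To illustrate the two kinds of statistic being compared, the sketch below computes an 'ordinary' Pearson correlation and a ROC area for the same set of forecasts on synthetic data. The data, the choice of event (observation above its median), and the use of the forecast value as the decision variable are all illustrative assumptions, not taken from the paper; the ROC area is obtained via its equivalence to the normalised Mann-Whitney U statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: continuous forecasts and observations with
# a built-in correlation of about 0.5 (not data from the study).
n = 500
obs = rng.normal(size=n)
fcst = 0.5 * obs + rng.normal(scale=np.sqrt(1 - 0.5**2), size=n)

# 'Ordinary' correlation between forecasts and observations.
r = np.corrcoef(fcst, obs)[0, 1]

# ROC area for the event "observation exceeds its median", using
# the forecast value as the decision variable. The area under the
# ROC curve equals the Mann-Whitney U statistic divided by n1*n0.
event = obs > np.median(obs)
ranks = np.argsort(np.argsort(fcst)) + 1  # ranks 1..n of the forecasts
n1 = event.sum()          # number of event cases
n0 = n - n1               # number of non-event cases
auc = (ranks[event].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

print(f"correlation: {r:.2f}, ROC area: {auc:.2f}")
```

A ROC area of 0.5 indicates no discrimination (analogous to zero correlation), while values approaching 1 indicate perfect discrimination, which is one reason the two statistics behave similarly in practice.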