Hit rate and False alarm rate

Verification measures like the RMSE and the ACC value equally the case of an event being forecast but not observed and the case of an event being observed but not forecast. In real life, however, the failure to forecast a storm that occurred normally has more dramatic consequences than forecasting a storm that did not occur. To assess forecast skill under these conditions, another type of verification must be used.

For any threshold (like frost/no frost, rain/dry or gale/no gale) the forecast is simplified to a yes/no statement (categorical forecast). The observation itself is put in one of two categories (event observed/not observed). Let H denote "hits", i.e. all correct yes-forecasts (the event is predicted to occur and it does occur), F false alarms, i.e. all incorrect yes-forecasts, M missed forecasts, i.e. all incorrect no-forecasts (the event is predicted not to occur but it does occur), and Z all correct no-forecasts. Assume altogether N forecasts of this type, with H+F+M+Z=N. A perfect forecast sample is one in which F and M are both zero. A large number of verification scores can be computed from these four values.

A forecast/verification table

forecast/obs      observed      not observed
forecast              H              F
not forecast          M              Z
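As a concrete illustration, here is a minimal sketch, assuming NumPy and hypothetical yes/no arrays fc (forecasts) and obs (observations), of how the four counts can be tallied from a sample of categorical forecasts:

```python
import numpy as np

# Hypothetical binary arrays: 1 = event forecast/observed, 0 = not (illustrative data only).
fc  = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 0])
obs = np.array([1, 0, 0, 1, 1, 0, 0, 0, 0, 1])

H = np.sum((fc == 1) & (obs == 1))  # hits: forecast yes, observed yes
F = np.sum((fc == 1) & (obs == 0))  # false alarms: forecast yes, observed no
M = np.sum((fc == 0) & (obs == 1))  # misses: forecast no, observed yes
Z = np.sum((fc == 0) & (obs == 0))  # correct no-forecasts: forecast no, observed no
N = H + F + M + Z                   # total number of forecasts
```

For this hypothetical sample H=2, F=2, M=2, Z=4 and N=10.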

The frequency bias BIAS=(H+F)/(H+M) is the ratio of the yes-forecast frequency to the yes-observation frequency.

The proportion correct PC=(H+Z)/N gives the fraction of all forecasts that were correct. It is usually very misleading because it credits correct "yes" and "no" forecasts equally and is strongly influenced by the more common category (typically the "no" event).

The probability of detection POD=H/(H+M), also known as Hit Rate (HR), measures the fraction of observed events that were correctly forecast.

The false alarm ratio FAR=F/(H+F) gives the fraction of forecast events that turned out to be non-events.

The probability of false detection POFD=F/(Z+F), also known as the false alarm rate, measures the fraction of observed non-events that were incorrectly forecast as events, i.e. the fraction of false alarms given that the event did not occur. POFD is generally associated with the evaluation of probabilistic forecasts, where it is combined with POD in the Relative Operating Characteristic (ROC) diagram.
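Putting the measures together, a sketch of how the scores defined so far could be computed from the four counts (the function name categorical_scores and the example counts are illustrative, not from the original text):

```python
def categorical_scores(H, F, M, Z):
    """Standard scores from a 2x2 contingency table."""
    N = H + F + M + Z
    return {
        "BIAS": (H + F) / (H + M),  # frequency bias
        "PC":   (H + Z) / N,        # proportion correct
        "POD":  H / (H + M),        # probability of detection (hit rate)
        "FAR":  F / (H + F),        # false alarm ratio
        "POFD": F / (Z + F),        # probability of false detection (false alarm rate)
    }

# With the hypothetical counts above (H=2, F=2, M=2, Z=4):
print(categorical_scores(2, 2, 2, 4))
# BIAS=1.0, PC=0.6, POD=0.5, FAR=0.5, POFD=0.33...
```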

A very simple measure of success of categorical forecasts is the difference POD-POFD, known as the Hanssen-Kuipers score or True Skill Score. Among other properties, it can easily be generalised to the verification of probabilistic forecasts (see 7.4 below).
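Since POFD and the Hanssen-Kuipers score both generalise to probabilistic forecasts, here is a sketch, with hypothetical arrays prob (forecast event probabilities) and obs (binary observations), of how a probabilistic forecast is reduced to categorical ones at successive probability thresholds; the resulting (POFD, POD) pairs trace the ROC curve, and POD-POFD can be evaluated at each threshold:

```python
import numpy as np

# Hypothetical probabilistic forecasts and binary observations (illustrative data only).
prob = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05])
obs  = np.array([1,   1,   0,   1,   0,   0,   1,   0,   0,   0  ])

# Turn the probabilistic forecast into a categorical one at each threshold
# and collect the resulting (POFD, POD) pairs; plotted, these trace the ROC curve.
for p in np.arange(0.1, 1.0, 0.1):
    fc = (prob >= p).astype(int)
    H = np.sum((fc == 1) & (obs == 1))
    F = np.sum((fc == 1) & (obs == 0))
    M = np.sum((fc == 0) & (obs == 1))
    Z = np.sum((fc == 0) & (obs == 0))
    pod  = H / (H + M) if (H + M) else 0.0
    pofd = F / (Z + F) if (Z + F) else 0.0
    print(f"threshold {p:.1f}: POD={pod:.2f}  POFD={pofd:.2f}  POD-POFD={pod - pofd:.2f}")
```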
