ROC Curves

False Acceptance

  • Definition: In biometrics, the instance of a security system incorrectly verifying or identifying an unauthorized person.
  • A false acceptance is typically considered the most serious of biometric security errors, as it gives unauthorized users access to systems that are expressly trying to keep them out.
  • A system's FAR is typically stated as the number of false acceptances divided by the number of identification attempts.

FAR && FRR

FAR (False Accept Rate): the fraction of impostor scores that exceed the threshold.
FRR (False Reject Rate): the fraction of genuine scores that fall below the threshold.

Example:
We have a fingerprint system. To evaluate the performance of any biometric system, we first need to gather a database. Assume we have done that: the database consists of 10 legitimate users (USER_1 to USER_10), and each user provided his finger 10 times (10 × 10 = 100 images in total). Assume also that a single image is sufficient for template creation.
Select a user (e.g. USER_1) and one of his fingerprint images and create the template. The rest of his images are used for verification, giving 9 genuine scores. All images of the other users are used as impostors, giving 90 impostor scores. Repeating the template generation for every image of every user yields 900 genuine scores and 9000 impostor scores in total. These scores are usually used to generate so-called ROC curves, from which you choose the threshold best suited to your problem. Once a threshold has been chosen, you can calculate the FAR and FRR using the definitions stated above.
Let's assume we have chosen 0.7 as the threshold, and that 100 impostor scores exceed it while 50 genuine scores fall below it. Then
$$FAR = \frac{100}{9000} \approx 0.011, \qquad FRR = \frac{50}{900} \approx 0.056$$
So FAR ≈ 1.1% and FRR ≈ 5.6%.
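
A minimal sketch of this calculation in Python (the score lists and threshold here are hypothetical; higher scores are assumed to mean a closer match):

```python
# Minimal sketch: FAR and FRR at a fixed threshold.
# genuine_scores / impostor_scores are hypothetical similarity scores;
# higher is assumed to mean a closer match.

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor scores that exceed the threshold.
    FRR: fraction of genuine scores that fall below the threshold."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Tiny made-up example; in the fingerprint example above there would be
# 900 genuine and 9000 impostor scores, giving FAR ≈ 1.1% and FRR ≈ 5.6%.
genuine = [0.9, 0.8, 0.75, 0.6, 0.95]
impostor = [0.2, 0.4, 0.72, 0.3, 0.1]
print(far_frr(genuine, impostor, threshold=0.7))  # (0.2, 0.2)
```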

ROC Curves

Measuring false rejection and acceptance rates for a given threshold condition provides an incomplete description of system performance. A more general description can be obtained by varying the threshold over a sufficiently large range and tabulating the resulting false rejection and acceptance rates. A tabulation of this kind can be summarized in a receiver operating characteristic (ROC) curve, first used in psychophysics. The ROC curve is obtained by assigning the probability of correct acceptance (1 − false rejection rate [FRR]) and the probability of incorrect acceptance (false acceptance rate [FAR]) to the vertical and horizontal axes, respectively, and varying the decision threshold, as shown in Figure 7.3 [3].

Figure 7.3. Receiver operating characteristic (ROC) curves; performance examples of three speaker recognition systems: A, B, and C.
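
As a rough sketch of that tabulation, assuming genuine and impostor score lists like those in the fingerprint example above, the ROC points (FAR on the horizontal axis, correct-acceptance rate on the vertical axis) can be collected by sweeping the threshold:

```python
# Sketch: tabulate ROC points by sweeping the decision threshold
# over the observed score range (hypothetical score lists).

def roc_points(genuine_scores, impostor_scores, num_steps=100):
    """Return a list of (FAR, 1 - FRR) pairs, one per threshold."""
    scores = genuine_scores + impostor_scores
    lo, hi = min(scores), max(scores)
    points = []
    for i in range(num_steps + 1):
        t = lo + (hi - lo) * i / num_steps
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        points.append((far, 1.0 - frr))  # x = FAR, y = correct acceptance
    return points
```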

EER

The EER is a commonly accepted overall measure of system performance. It corresponds to the threshold at which the false acceptance rate is equal to the false rejection rate. The EER point corresponds to the intersection of the ROC curve with the straight line at 45 degrees, indicated in Figure 7.3.
If we plot both the FAR and FRR on a graph, as in Figure 2.3, the equal error rate (EER) is the point where the two curves intersect. The EER is sometimes used as a measure of the accuracy of biometric systems.
Figure 2.3. Equal error rate.
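
A simple way to approximate the EER numerically is to sweep the threshold and keep the point at which FAR and FRR are closest; a minimal sketch, again assuming hypothetical genuine/impostor score lists:

```python
# Sketch: approximate the equal error rate (EER) by finding the
# threshold where |FAR - FRR| is smallest (hypothetical score lists).

def equal_error_rate(genuine_scores, impostor_scores, num_steps=1000):
    scores = genuine_scores + impostor_scores
    lo, hi = min(scores), max(scores)
    best_gap, best_t, best_eer = None, None, None
    for i in range(num_steps + 1):
        t = lo + (hi - lo) * i / num_steps
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, best_t, best_eer = gap, t, (far + frr) / 2.0
    return best_t, best_eer  # threshold and the EER estimate at that point
```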

Securing Biometric Devices

The following list explains the various rates that should be tested whenever you use biometric devices.

  • FAR = the percent of unauthorized users incorrectly matched to a valid user’s biometric
  • FRR = the percent of incorrectly rejected valid users
  • CER = the error rate at which FAR equals FRR
  • FTA = the failure to acquire rate
  • FTE = the failure to enroll rate

In addition to FAR and FRR, note the crossover error rate (CER), the failure to enroll (FTE) rate, and the failure to acquire (FTA) rate.

  • FTE denotes the number of users who are not able to provide an acceptable enrollment image and therefore cannot be registered with the system.
  • FTA denotes the number of attempts in which the system is unable to capture a usable sample, for example because of some incompatibility between the user and the device.

TPR and FPR

TPR (True Positive Rate) and FPR (False Positive Rate), the true positive rate and the false positive rate, are evaluation metrics commonly used for binary classifiers. Both are defined in terms of the confusion matrix.

| n = 192   | predicted: 0 | predicted: 1 |
|-----------|--------------|--------------|
| actual: 0 | 118          | 12           |
| actual: 1 | 47           | 15           |

TPR (True Positive Rate), also called recall or sensitivity, is the percentage of actual positive examples that are correctly identified:
$$TPR = \frac{TP}{TP + FN}$$
FPR (False Positive Rate) is the percentage of actual negative examples that are wrongly predicted as positive:
$$FPR = \frac{FP}{TN + FP}$$
There are other classifier performance measures based on the confusion matrix; for details, refer to material on confusion-matrix-based classifiers.
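
Plugging in the counts from the example table above (TP = 15, FN = 47, FP = 12, TN = 118), a small sketch of the arithmetic:

```python
# Counts taken from the n = 192 example confusion matrix above.
TP, FN = 15, 47   # actual = 1 row
FP, TN = 12, 118  # actual = 0 row

tpr = TP / (TP + FN)  # recall / sensitivity: 15 / 62  ≈ 0.242
fpr = FP / (TN + FP)  # false positive rate:  12 / 130 ≈ 0.092

print(f"TPR = {tpr:.3f}, FPR = {fpr:.3f}")
```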

Confusion matrix:
Based on the relationship between the predicted value and the true value, samples fall into four groups:
True positive (TP): the predicted value and the true value are both 1.
False positive (FP): the predicted value is 1 and the true value is 0.
True negative (TN): the predicted value and the true value are both 0.
False negative (FN): the predicted value is 0 and the true value is 1.
(Figure from Machine Learning: A Probabilistic Perspective.)
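
A minimal sketch of counting these four cells from 0/1 label lists (the actual/predicted values below are made up for illustration):

```python
# Sketch: count confusion-matrix cells from hypothetical 0/1 labels.

def confusion_counts(actual, predicted):
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
    return tp, fp, tn, fn

actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 0, 0]
print(confusion_counts(actual, predicted))  # (2, 1, 3, 2)
```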
ROC curve:
From the confusion matrix we can obtain the true positive rate, $TPR = \frac{TP}{TP + FN}$, and the false positive rate, $FPR = \frac{FP}{TN + FP}$. Plotting TPR (y-axis) against FPR (x-axis) as the decision threshold varies gives the ROC curve.
(ROC plot from Machine Learning: A Probabilistic Perspective.)
Other evaluation measures derived from the ROC curve:
AUC (area under the curve) is the area under the ROC curve; the larger it is, the better the classifier, with a maximum value of 1. In the figure, the shaded region under the blue curve is that curve's AUC.
EER (equal error rate) is the value at which FPR = FNR. Since $FNR = \frac{FN}{TP + FN}$ and therefore $TPR = 1 - FNR$, we can draw a straight line from (0, 1) to (1, 0); along this line the horizontal coordinate equals FNR. At the points where this line intersects the ROC curves (points A and B in the figure), the curve's horizontal coordinate (FPR) equals the line's (FNR), so the false positive rate equals the false negative rate; this is the equal error rate (EER).
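
A rough sketch of estimating AUC from a tabulated ROC curve with the trapezoidal rule (the (FPR, TPR) sample points below are made up and assumed to be sorted by FPR):

```python
# Sketch: approximate the area under a ROC curve (AUC) with the
# trapezoidal rule, given (FPR, TPR) points sorted by FPR.

def auc_trapezoid(points):
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical points on a ROC curve:
points = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.8), (0.6, 0.95), (1.0, 1.0)]
print(auc_trapezoid(points))  # closer to 1.0 means a better classifier
```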

Application

(Figure from: DeepIrisNet: Deep iris representation with applications in iris recognition and cross-sensor iris recognition.)
