Image Semantic Segmentation Metrics and Their Implementation in Keras/PyTorch/TensorFlow

I. The Metrics

1- Accuracy

2- Precision

3- Recall

4- F1-Score

5- Fβ-Score

Definitions: for a binary classification problem, combining the actual label with the predicted label gives the following four outcomes:
                      Predicted Positive      Predicted Negative
Actually Positive     TP (True Positive)      FN (False Negative)
Actually Negative     FP (False Positive)     TN (True Negative)
TP: samples that are actually positive and are predicted as positive (correct prediction);

TN: samples that are actually negative and are predicted as negative (correct prediction);

FP: samples that are actually negative but are predicted as positive (wrong prediction);

FN: samples that are actually positive but are predicted as negative (wrong prediction).
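
To make these four counts concrete, here is a minimal NumPy sketch (the label and prediction vectors are made up purely for illustration):

import numpy as np

# Hypothetical binary labels and predictions (1 = positive, 0 = negative)
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])

TP = np.sum((y_true == 1) & (y_pred == 1))  # actual positive, predicted positive
TN = np.sum((y_true == 0) & (y_pred == 0))  # actual negative, predicted negative
FP = np.sum((y_true == 0) & (y_pred == 1))  # actual negative, predicted positive
FN = np.sum((y_true == 1) & (y_pred == 0))  # actual positive, predicted negative

print(TP, TN, FP, FN)  # 3 3 1 1
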
The formula for each metric is given below.

1- Accuracy measures how accurately the model predicts over the whole dataset: samples whose label is True are judged as True, and samples whose label is False are judged as False. The formula is:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
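
Plugging in the counts from the illustrative NumPy sketch above:

TP, TN, FP, FN = 3, 3, 1, 1  # counts from the illustrative sketch above

accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.75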

2- Precision measures, out of all the samples you predicted as positive, what fraction are actually True. It is often used as an evaluation metric for recommender systems. The formula is:

Precision = TP / (TP + FP)
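
With the same illustrative counts:

TP, FP = 3, 1  # counts from the illustrative sketch above

precision = TP / (TP + FP)
print(precision)  # 0.75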

3- Recall measures, out of all the samples that are actually True, what fraction you predicted as True. For example, suppose there are 100 parts, 50 of which are qualified; if you predict 30 of the 100 parts to be qualified and all 30 of them really are qualified, then the recall is 30/50 = 60%. The formula is:

Recall = TP / (TP + FN)
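
As a sanity check on the parts example, a few lines of Python reproduce the numbers (assuming every part predicted as qualified really is qualified):

actual_positives = 50      # qualified parts among the 100
true_positives = 30        # predicted as qualified and actually qualified
predicted_positives = 30   # parts predicted as qualified

recall = true_positives / actual_positives        # 30 / 50
precision = true_positives / predicted_positives  # 30 / 30
print(recall, precision)  # 0.6 1.0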

4- F1-Score is the harmonic mean of precision and recall; it treats the two metrics as equally important and therefore summarizes both in a single number. The formula is:

F1 = 2 × Precision × Recall / (Precision + Recall)

  • A value close to 0 means at least one of the two metrics is close to 0, i.e. the model is poor;
  • A value close to 1 means both metrics are close to 1, i.e. the model is good.
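
Continuing the parts example (precision = 1.0, recall = 0.6):

precision, recall = 1.0, 0.6

f1 = 2 * precision * recall / (precision + recall)
print(f1)  # 0.75 -- pulled toward the weaker of the two metrics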

5- Fβ-Score treats recall as β times as important as precision; when β = 1 it reduces to the F1-Score. The general formula is:

Fβ = (1 + β²) × Precision × Recall / (β² × Precision + Recall)

  • When β = 2 (the F2-Score), recall has a larger influence than precision;
  • When β = 0.5 (the F0.5-Score), recall has a smaller influence than precision.
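
A small framework-agnostic helper (a sketch, not part of the implementations below) makes the role of β explicit:

def f_beta(precision, recall, beta=1.0):
    """F-beta score: recall is weighted beta times as heavily as precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall + 1e-12)

print(f_beta(1.0, 0.6, beta=1))    # ~0.75  (same as F1)
print(f_beta(1.0, 0.6, beta=2))    # ~0.65  (dragged toward the lower recall)
print(f_beta(1.0, 0.6, beta=0.5))  # ~0.88  (dominated by the higher precision)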

II. Code Implementations

1. Keras

from tensorflow.keras import backend as K  # for standalone Keras, use: from keras import backend as K


def cal_base(y_true, y_pred):
    """Count TP, TN, FP and FN from binarized labels and predictions."""
    y_pred_positive = K.round(K.clip(y_pred, 0, 1))
    y_pred_negative = 1 - y_pred_positive

    y_positive = K.round(K.clip(y_true, 0, 1))
    y_negative = 1 - y_positive

    TP = K.sum(y_positive * y_pred_positive)
    TN = K.sum(y_negative * y_pred_negative)

    FP = K.sum(y_negative * y_pred_positive)
    FN = K.sum(y_positive * y_pred_negative)

    return TP, TN, FP, FN


def acc(y_true, y_pred):
    TP, TN, FP, FN = cal_base(y_true, y_pred)
    ACC = (TP + TN) / (TP + FP + FN + TN + K.epsilon())
    return ACC


def sensitivity(y_true, y_pred):
    """ recall """
    TP, TN, FP, FN = cal_base(y_true, y_pred)
    SE = TP/(TP + FN + K.epsilon())
    return SE


def precision(y_true, y_pred):
    TP, TN, FP, FN = cal_base(y_true, y_pred)
    PC = TP/(TP + FP + K.epsilon())
    return PC


def specificity(y_true, y_pred):
    TP, TN, FP, FN = cal_base(y_true, y_pred)
    SP = TN / (TN + FP + K.epsilon())
    return SP


def f1_score(y_true, y_pred):
    SE = sensitivity(y_true, y_pred)
    PC = precision(y_true, y_pred)
    F1 = 2 * SE * PC / (SE + PC + K.epsilon())
    return F1
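
These functions can be passed as custom metrics when compiling a model. The snippet below is only a usage sketch: the tiny stand-in model is hypothetical and not part of the original code; any model with a per-pixel sigmoid output would do.

from tensorflow import keras

# A hypothetical stand-in model with a per-pixel sigmoid output (illustrative only)
model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    keras.layers.Conv2D(1, 1, activation="sigmoid"),
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[acc, sensitivity, precision, specificity, f1_score],
)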

2. PyTorch

"""
reference from: https://github.com/LeeJunHyun/Image_Segmentation/blob/master/evaluation.py
"""

import torch

# SR : Segmentation Result
# GT : Ground Truth

def get_accuracy(SR,GT,threshold=0.5):
    SR = SR > threshold
    GT = GT == torch.max(GT)
    corr = torch.sum(SR==GT)
    tensor_size = SR.size(0)*SR.size(1)*SR.size(2)*SR.size(3)
    acc = float(corr)/float(tensor_size)

    return acc

def get_sensitivity(SR,GT,threshold=0.5):
    # Sensitivity == Recall
    SR = SR > threshold
    GT = GT == torch.max(GT)

    # TP : True Positive
    # FN : False Negative
    # use logical AND (&) instead of "+ ... == 2", which no longer works on bool tensors
    TP = (SR == 1) & (GT == 1)
    FN = (SR == 0) & (GT == 1)

    SE = float(torch.sum(TP)) / (float(torch.sum(TP) + torch.sum(FN)) + 1e-6)
    
    return SE

def get_specificity(SR,GT,threshold=0.5):
    SR = SR > threshold
    GT = GT == torch.max(GT)

    # TN : True Negative
    # FP : False Positive
    TN = (SR == 0) & (GT == 0)
    FP = (SR == 1) & (GT == 0)

    SP = float(torch.sum(TN)) / (float(torch.sum(TN) + torch.sum(FP)) + 1e-6)
    
    return SP

def get_precision(SR,GT,threshold=0.5):
    SR = SR > threshold
    GT = GT == torch.max(GT)

    # TP : True Positive
    # FP : False Positive
    TP = (SR == 1) & (GT == 1)
    FP = (SR == 1) & (GT == 0)

    PC = float(torch.sum(TP)) / (float(torch.sum(TP) + torch.sum(FP)) + 1e-6)

    return PC

def get_F1(SR,GT,threshold=0.5):
    # Sensitivity == Recall
    SE = get_sensitivity(SR,GT,threshold=threshold)
    PC = get_precision(SR,GT,threshold=threshold)

    F1 = 2*SE*PC/(SE+PC + 1e-6)

    return F1

def get_JS(SR,GT,threshold=0.5):
    # JS : Jaccard similarity
    SR = SR > threshold
    GT = GT == torch.max(GT)
    
    Inter = torch.sum(SR & GT)
    Union = torch.sum(SR | GT)
    
    JS = float(Inter)/(float(Union) + 1e-6)
    
    return JS

def get_DC(SR,GT,threshold=0.5):
    # DC : Dice Coefficient
    SR = SR > threshold
    GT = GT == torch.max(GT)

    Inter = torch.sum(SR & GT)
    DC = float(2*Inter)/(float(torch.sum(SR)+torch.sum(GT)) + 1e-6)

    return DC
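
A quick smoke test with random tensors (shapes and values are made up; SR plays the role of a sigmoid probability map, GT a binary mask):

if __name__ == "__main__":
    torch.manual_seed(0)
    SR = torch.rand(2, 1, 64, 64)                   # predicted probabilities in [0, 1]
    GT = (torch.rand(2, 1, 64, 64) > 0.5).float()   # binary ground-truth mask

    print("accuracy :", get_accuracy(SR, GT))
    print("recall   :", get_sensitivity(SR, GT))
    print("precision:", get_precision(SR, GT))
    print("F1       :", get_F1(SR, GT))
    print("Jaccard  :", get_JS(SR, GT))
    print("Dice     :", get_DC(SR, GT))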

3. TensorFlow

import tensorflow as tf


class Metrics(object):
    def __init__(self, gt, predict):
        self.gt = gt
        self.pd = predict
        self.epsilon = tf.keras.backend.epsilon()
        self.tp, self.tn, self.fp, self.fn = self.calc_base()

    def get_iou(self):
        """ intersection over union """
        iou = self.tp / (self.tp + self.fp + self.fn + self.epsilon)

        return iou

    def get_dice(self):
        """ dice coefficient """
        dice = (2 * self.tp) / (2 * self.tp + self.fp + self.fn + self.epsilon)

        return dice

    def get_info(self):
        print("ground truth: ", self.gt)
        print("predict: ", self.pd)

    def calc_base(self):
        gt = tf.convert_to_tensor(self.gt)
        pd = tf.convert_to_tensor(self.pd)

        gt_positive = tf.round(tf.clip_by_value(gt, 0, 1))
        gt_negative = 1 - gt_positive

        pd_positive = tf.round(tf.clip_by_value(pd, 0, 1))
        pd_negative = 1 - pd_positive

        tp = tf.reduce_sum(gt_positive * pd_positive)
        tn = tf.reduce_sum(gt_negative * pd_negative)
        fp = tf.reduce_sum(gt_negative * pd_positive)
        fn = tf.reduce_sum(gt_positive * pd_negative)

        return tp, tn, fp, fn

    def get_recall(self):
        """ sensitivity or recall """
        recall = self.tp / (self.tp + self.fn + self.epsilon)

        return recall

    def get_f1_score(self):
        """ f1-score """
        recall = self.get_recall()
        precision = self.get_precision()
        f1_score = 2 * recall * precision / (recall + precision + self.epsilon)

        return f1_score

    def get_accuracy(self):
        """ accuracy """
        accuracy = (self.tp + self.tn) / (self.tp + self.tn + self.fp + self.fn + self.epsilon)

        return accuracy

    def get_precision(self):
        """ precision """
        precision = self.tp / (self.tp + self.fp + self.epsilon)

        return precision

    def get_specificity(self):
        """ specificity """
        specificity = self.tn / (self.tn + self.fp + self.epsilon)

        return specificity

 
