(Medical Segmentation) Basic Concepts
Semantic Segmentation Loss Functions
BCEWithLogitsLoss: binary cross-entropy
- One can specify class weights via `weight: [w_1, …, w_k]` in the loss section of the config. This is useful when the dataset is imbalanced, i.e. one class has at least 3 orders of magnitude more voxels than the others.
- Note: BCEWithLogitsLoss combines a sigmoid layer and binary cross-entropy in one class; by using the log-sum-exp trick it stays numerically stable even for very large or very small inputs, unlike a separate Sigmoid followed by BCELoss.
- Formula:

$$-\frac{1}{N}\sum_{i=1}^{N}\left(y_i\log(\sigma(x_i)) + (1-y_i)\log(1-\sigma(x_i))\right)$$
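A minimal PyTorch sketch of using this loss; the tensor shapes and the `pos_weight` value are illustrative assumptions, not from the source:

```python
import torch
import torch.nn as nn

# Raw network outputs (logits) and binary targets for a batch of
# 3D patches; shape (batch, 1, D, H, W) is illustrative.
logits = torch.randn(2, 1, 16, 64, 64)
targets = torch.randint(0, 2, (2, 1, 16, 64, 64)).float()

# pos_weight up-weights the positive (foreground) class, which helps
# when foreground voxels are rare; the value 10.0 is an assumption.
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([10.0]))
loss = loss_fn(logits, targets)  # sigmoid is applied internally
```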
DiceLoss: Dice loss
- Standard DiceLoss, defined as 1 - DiceCoefficient, used for binary semantic segmentation.
- When more than 2 classes are present in the ground truth, it computes the DiceLoss per channel and averages the values.
- Formula:

$$1-\frac{2TP}{2TP+FP+FN}$$
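A sketch of a per-channel soft Dice loss, assuming one-hot targets and sigmoid-normalized logits; the function name and the epsilon smoothing term are illustrative:

```python
import torch

def dice_loss(logits, targets, epsilon=1e-6):
    """Soft Dice loss, computed per channel and averaged.

    Assumes `logits` has shape (N, C, ...) and `targets` is a one-hot
    tensor of the same shape.
    """
    probs = torch.sigmoid(logits)
    # Flatten everything except the channel dimension: (C, N * spatial).
    probs = probs.transpose(0, 1).flatten(1)
    targets = targets.transpose(0, 1).flatten(1).float()
    intersection = (probs * targets).sum(-1)
    denominator = probs.sum(-1) + targets.sum(-1)
    dice_per_channel = (2 * intersection + epsilon) / (denominator + epsilon)
    return 1.0 - dice_per_channel.mean()
```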
BCEDiceLoss: combined BCE and Dice loss
- Linear combination of the BCE and Dice losses, i.e. alpha * BCE + beta * Dice; alpha and beta can be specified in the loss section of the config.
- Formula:

$$\alpha\left(-\frac{1}{N}\sum_{i=1}^{N}\left(y_i\log(\sigma(x_i)) + (1-y_i)\log(1-\sigma(x_i))\right)\right) + \beta\left(1-\frac{2TP}{2TP+FP+FN}\right)$$
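A self-contained sketch of the combination; the class name, defaults, and the inlined soft-Dice term are illustrative assumptions:

```python
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    """Sketch of alpha * BCE + beta * (1 - per-channel soft Dice)."""

    def __init__(self, alpha=1.0, beta=1.0):
        super().__init__()
        self.alpha, self.beta = alpha, beta
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits, targets):
        probs = torch.sigmoid(logits).transpose(0, 1).flatten(1)
        trues = targets.transpose(0, 1).flatten(1).float()
        dice = (2 * (probs * trues).sum(-1) + 1e-6) / (
            probs.sum(-1) + trues.sum(-1) + 1e-6)
        return (self.alpha * self.bce(logits, targets)
                + self.beta * (1.0 - dice.mean()))
```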
CrossEntropyLoss: cross-entropy loss
- One can specify class weights via `weight: [w_1, …, w_k]` in the loss section of the config.
- Note: CrossEntropyLoss combines LogSoftmax and NLLLoss in one class, which is more numerically stable for very large or very small inputs than applying Softmax followed by NLLLoss separately.
- Formula:

$$-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{k} y_{ij}\log(\sigma(x_{ij}))$$
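A minimal PyTorch sketch; the class count and weight values are illustrative. Note that PyTorch's CrossEntropyLoss takes raw logits and integer class labels:

```python
import torch
import torch.nn as nn

# Logits of shape (N, C, D, H, W); integer class labels of shape (N, D, H, W).
logits = torch.randn(2, 3, 16, 64, 64)
labels = torch.randint(0, 3, (2, 16, 64, 64))

# weight[j] scales the loss of class j; these values are illustrative,
# e.g. taken from `weight: [w_1, …, w_k]` in the config.
weights = torch.tensor([0.2, 1.0, 5.0])
loss = nn.CrossEntropyLoss(weight=weights)(logits, labels)
```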
PixelWiseCrossEntropyLoss: pixel-wise cross-entropy loss
- One can specify per-pixel weights in order to give more gradient to the important/under-represented regions in the ground truth.
- Formula:

$$-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{k} w_{ij}\,y_{ij}\log(\sigma(x_{ij}))$$
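A sketch of one way to implement a per-pixel weighted cross-entropy; the helper name is illustrative, not a specific library API:

```python
import torch.nn.functional as F

def pixel_wise_cross_entropy(logits, labels, pixel_weights):
    """Sketch of a per-pixel weighted cross-entropy.

    `pixel_weights` has one weight per voxel (same shape as `labels`)
    and lets important/under-represented regions contribute more gradient.
    """
    # Unreduced CE gives one loss value per voxel.
    per_voxel = F.cross_entropy(logits, labels, reduction='none')
    return (per_voxel * pixel_weights).mean()
```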
WeightedCrossEntropyLoss: weighted cross-entropy loss
- See 'Weighted cross-entropy (WCE)' in the paper cited below for a detailed explanation.
- Formula:

$$-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{k} w_j\,y_{ij}\log(\sigma(x_{ij}))$$
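The exact per-class weights w_j are defined in the cited paper; as an illustrative stand-in (an assumption, not the paper's formula), the sketch below weights classes by inverse frequency computed from the current batch:

```python
import torch
import torch.nn.functional as F

def weighted_cross_entropy(logits, labels, num_classes):
    """Sketch of cross-entropy with class weights derived from the batch.

    Inverse class frequency is used here as an illustrative assumption;
    see the cited paper for the exact WCE weighting.
    """
    counts = torch.bincount(labels.flatten(), minlength=num_classes).float()
    weights = counts.sum() / (num_classes * counts.clamp(min=1))
    return F.cross_entropy(logits, labels, weight=weights)
```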
GeneralizedDiceLoss: generalized Dice loss
- Note: use this loss function only if the labels in the training dataset are very imbalanced, e.g. one class having at least 3 orders of magnitude more voxels than the others. Otherwise use the standard DiceLoss.
- Formula (per the cited paper, with $X_j$ and $Y_j$ the predicted and ground-truth sets for class $j$):

$$1 - 2\,\frac{\sum_{j=1}^{k} w_j\,|X_j \cap Y_j|}{\sum_{j=1}^{k} w_j\,(|X_j| + |Y_j|)}, \qquad w_j = \frac{1}{|Y_j|^2}$$
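A sketch following the weighted formula above; the use of sigmoid normalization, one-hot targets, and epsilon smoothing are assumptions:

```python
import torch

def generalized_dice_loss(logits, targets, epsilon=1e-6):
    """Sketch of the generalized Dice loss.

    `targets` is one-hot with shape (N, C, ...); each channel is weighted
    by the inverse square of its ground-truth volume, so rare classes
    contribute as much as frequent ones.
    """
    probs = torch.sigmoid(logits).transpose(0, 1).flatten(1)
    targets = targets.transpose(0, 1).flatten(1).float()
    target_volume = targets.sum(-1)
    weights = 1.0 / (target_volume * target_volume).clamp(min=epsilon)
    intersection = (weights * (probs * targets).sum(-1)).sum()
    denominator = (weights * (probs.sum(-1) + targets.sum(-1))).sum()
    return 1.0 - 2.0 * intersection / denominator.clamp(min=epsilon)
```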
For a detailed explanation of some of the supported loss functions see: Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations.
Semantic Segmentation Evaluation Metrics
MeanIoU: mean intersection over union
- Formula:

$$\frac{1}{N}\sum_{i=1}^{N}\frac{TP}{TP+FP+FN}$$
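A sketch for binarized (0/1) prediction and ground-truth masks of shape (N, C, ...); the helper name is illustrative:

```python
import torch

def mean_iou(pred_mask, true_mask, epsilon=1e-6):
    """Sketch of mean IoU, computed per channel and averaged."""
    pred = pred_mask.bool().transpose(0, 1).flatten(1)
    true = true_mask.bool().transpose(0, 1).flatten(1)
    intersection = (pred & true).sum(-1).float()  # TP
    union = (pred | true).sum(-1).float()         # TP + FP + FN
    return ((intersection + epsilon) / (union + epsilon)).mean()
```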
DiceCoefficient: Dice coefficient
- Computes the per-channel Dice coefficient and returns the average.
- Formula:

$$\frac{2TP}{2TP+FP+FN}$$
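A sketch mirroring the mean_iou helper above, again for binarized masks of shape (N, C, ...):

```python
import torch

def dice_coefficient(pred_mask, true_mask, epsilon=1e-6):
    """Sketch of the per-channel Dice coefficient, averaged over channels."""
    pred = pred_mask.bool().transpose(0, 1).flatten(1)
    true = true_mask.bool().transpose(0, 1).flatten(1)
    tp = (pred & true).sum(-1).float()
    # pred.sum() + true.sum() = (TP + FP) + (TP + FN) = 2TP + FP + FN
    denominator = pred.sum(-1).float() + true.sum(-1).float()
    return ((2 * tp + epsilon) / (denominator + epsilon)).mean()
```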
ConfusionMatrix: confusion matrix

| Ground Truth | Prediction | TP | FP | FN | TN |
|--------------|------------|----|----|----|----|
| 0 | 0 | 0 | 0 | 0 | 1 |
| 0 | 1 | 0 | 1 | 0 | 0 |
| 1 | 0 | 0 | 0 | 1 | 0 |
| 1 | 1 | 1 | 0 | 0 | 0 |
- TP: True Positive, FP: False Positive, FN: False Negative, TN: True Negative
- precision = TP / (TP + FP)
- recall = TP / (TP + FN)
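A sketch that derives the four counts and the two ratios from a pair of 0/1 masks; the function name is illustrative:

```python
import torch

def binary_confusion(pred, true):
    """Sketch: TP/FP/FN/TN counts plus precision and recall for 0/1 masks."""
    pred, true = pred.bool(), true.bool()
    tp = (pred & true).sum().item()
    fp = (pred & ~true).sum().item()
    fn = (~pred & true).sum().item()
    tn = (~pred & ~true).sum().item()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return tp, fp, fn, tn, precision, recall
```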
F1 Score: F1 score
- Formula:

$$\frac{2TP}{2TP+FP+FN}$$
- The F1 score is the harmonic mean of precision and recall.
- Formula:

$$\frac{2}{\frac{1}{precision}+\frac{1}{recall}}$$
- F1 score vs. Dice coefficient: for a single binary segmentation they are algebraically identical, since both reduce to 2TP / (2TP + FP + FN). Reported values differ only in how the scores are aggregated, e.g. macro-averaged per channel/class versus micro-averaged over all pixels at once.
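Since both formulas reduce to the same expression, a quick numeric check (the counts are illustrative):

```python
# With TP=6, FP=2, FN=1:
tp, fp, fn = 6, 2, 1
precision = tp / (tp + fp)             # 0.75
recall = tp / (tp + fn)                # ~0.857
f1 = 2 / (1 / precision + 1 / recall)  # harmonic mean
dice = 2 * tp / (2 * tp + fp + fn)
assert abs(f1 - dice) < 1e-9           # both equal 0.8
```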