Kaggle Evaluation Metrics, Part 2: Error Metrics for Classification Problems

This is basically required reading, so here is a quick translation along the way. (Left as a draft for now; to be updated continuously.)


Error Metrics for Classification Problems

Logarithmic Loss


The logarithm of the likelihood function for a Bernoulli random distribution.

In plain English, this error metric is used where contestants have to predict whether something is true or false with a probability (likelihood) ranging from definitely true (1), through equally likely true or false (0.5), to definitely false (0).

The use of the logarithm on the error provides extreme punishment for being both confident and wrong. In the worst possible case, a single prediction that something is definitely true (1) when it is actually false adds an infinite amount to your error score and makes every other entry pointless. In Kaggle competitions, predictions are bounded away from the extremes by a small value in order to prevent this.

$$\text{logloss} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M} y_{ij}\log(p_{ij})$$

where N is the number of examples, M is the number of classes, $y_{ij}$ is a binary variable indicating whether class j was the correct label for example i, and $p_{ij}$ is the predicted probability that example i belongs to class j. In the case where the number of classes is 2 (M=2), the formula simplifies to:

$$\text{logloss} = -\frac{1}{N}\sum_{i=1}^{N}\bigl(y_i\log(p_i) + (1-y_i)\log(1-p_i)\bigr)$$

Python Code

import numpy as np

def logloss(act, pred):
    # Clip predictions away from 0 and 1 so that log() never sees exactly 0
    epsilon = 1e-15
    pred = np.clip(pred, epsilon, 1 - epsilon)
    act = np.asarray(act)
    # Negative mean of the Bernoulli log-likelihood
    ll = -np.mean(act * np.log(pred) + (1 - act) * np.log(1 - pred))
    return ll

R Code

MultiLogLoss <- function(act, pred){
  # Clip predictions away from 0 and 1 to avoid log(0)
  eps <- 1e-15
  pred <- pmin(pmax(pred, eps), 1 - eps)
  # Negative mean of the Bernoulli log-likelihood
  sum(act * log(pred) + (1 - act) * log(1 - pred)) * -1 / NROW(act)
}

Sample usage Example in Python

pred = [1,0,1,0]
act = [1,0,1,0]
print(logloss(act,pred))
With the epsilon clipping above, this prints a value on the order of 1e-15 (effectively 0). Without the clipping, log(0) would trigger a "divide by zero encountered in log" warning and the result would be NaN.

Sample usage Example in R

pred1 = c(0.8,0.2)
pred2 = c(0.6,0.4)
pred <- rbind(pred1,pred2)
pred
act1 <- c(1,0)
act2 <- c(1,0)
act <- rbind(act1,act2)
MultiLogLoss(act,pred)


Mean Consequential Error (MCE)

The mean/average of the "Consequential Error", where all errors are equally bad (1) and the only value that matters is an exact prediction (0).

$$\text{MCE} = \frac{1}{n}\sum_{y_i \neq \hat{y}_i} 1$$

Matlab code:

MCE = mean(logical(y - y_pred));  % any nonzero difference counts as one error (assumes numeric labels)
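For reference outside Matlab, here is a minimal Python sketch of the same idea; the function name mce is mine and it assumes y and y_pred are equal-length arrays of class labels:

import numpy as np

def mce(y, y_pred):
    # Mean Consequential Error: fraction of predictions that are not exactly right
    y = np.asarray(y)
    y_pred = np.asarray(y_pred)
    return np.mean(y != y_pred)

print(mce([1, 2, 3, 4], [1, 2, 0, 0]))  # 0.5 -- two of the four predictions are wrong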



MAP@n

Introduction

Parameters: n

Suppose there are m missing outbound edges from a user in a social graph, and you can predict up to n other nodes that the user is likely to follow. Then, by adapting the definition of average precision in IR (http://en.wikipedia.org/wiki/Information_retrieval, http://sas.uwaterloo.ca/stats_navigation/techreports/04WorkingPapers/2004-09.pdf), the average precision at n for this user is:

$$ap@n = \frac{\sum_{k=1}^{n} P(k)}{\min(m, n)}$$

where P(k) means the precision at cut-off k in the item list, i.e., the ratio of the number of recommended nodes actually followed, up to position k, over the number k; P(k) equals 0 when the k-th item is not followed upon recommendation; m is the number of relevant nodes; n is the number of predicted nodes. If the denominator is zero, P(k)/min(m,n) is set to zero.

(1) If the user follows recommended nodes #1 and #3 along with another node that wasn't recommended, then ap@10 = (1/1 + 2/3)/3 ≈ 0.56

(2) If the user follows recommended nodes #1 and #2 along with another node that wasn't recommended, then ap@10 = (1/1 + 2/2)/3 ≈ 0.67

(3) If the user follows recommended nodes #1 and #3 and has no other missing nodes, then ap@10 = (1/1 + 2/3)/2 ≈ 0.83

The mean average precision for N users at position n is the average of the average precision of each user, i.e.,

$$MAP@n = \frac{1}{N}\sum_{i=1}^{N} ap@n_i$$

Note that this means order matters, with a caveat: order only affects the score when at least one prediction is incorrect. In other words, if all predictions are correct, it doesn't matter in which order they are given.

Thus, if you recommend two nodes A & B in that order and a user follows node A and not node B, your MAP@2 score will be higher (better) than if you recommended B and then A. This makes sense - you want the most relevant results to show up first. Consider the following examples:

(1) The user follows recommended nodes #1 and #2 and has no other missing nodes, then ap@2 = (1/1 + 1/1)/2 = 1.0

(2) The user follows recommended nodes #2 and #1 and has no other missing nodes, then ap@2 = (1/1 + 1/1)/2 = 1.0

(3) The user follows node #1 and it was recommended first along with another node that wasn't recommended, then ap@2 = (1/1 + 0)/2 = 0.5

(4) The user follows node #1 but it was recommended second along with another node that wasn't recommended, then ap@2 = (0 + 1/2)/2 = 0.25

So, it is better to submit more certain recommendations first. AP score reflects this.
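As a concrete illustration, here is one straightforward Python sketch of ap@n and MAP@n following the definition above; the names apk and mapk are mine (not an official Kaggle implementation), actual is the list of nodes the user actually follows, and predicted is the ranked recommendation list:

def apk(actual, predicted, k=10):
    # Average precision at k for a single user
    predicted = predicted[:k]
    hits, score = 0, 0.0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            hits += 1
            score += hits / (i + 1.0)   # P(k) is only non-zero at positions that are hits
    denom = min(len(actual), k)
    return score / denom if denom > 0 else 0.0

def mapk(actuals, predicteds, k=10):
    # Mean of apk over all users
    return sum(apk(a, p, k) for a, p in zip(actuals, predicteds)) / len(actuals)

# Example (3) from the first list above: the user follows nodes #1 and #3 and has no other missing nodes
print(apk([1, 3], [1, 2, 3], k=10))  # (1/1 + 2/3) / 2 ≈ 0.83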

Here's an easy intro to MAP: http://fastml.com/what-you-wanted-to-know-about-mean-average-precision/

Here's another intro to MAP from our forums.

Sample Implementations

Contests that used MAP@K

Article needs:

  • explanation
  • formula
  • example solution & submission files


Only an R implementation is available:

R code

library(dplyr)

multiloss <- function(predicted, actual){
  # to add: reorder the rows

  # Drop the id column and bound the predictions away from 0 and 1
  predicted_m <- as.matrix(select(predicted, -device_id))
  predicted_m <- apply(predicted_m, c(1, 2), function(x) max(min(x, 1 - 10^(-15)), 10^(-15)))

  actual_m <- as.matrix(select(actual, -device_id))
  # Multi-class log loss: negative mean log-probability assigned to the true class
  score <- -sum(actual_m * log(predicted_m)) / nrow(predicted_m)

  return(score)
}
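For comparison, a rough Python equivalent of the function above; the device_id column handling is specific to that competition's submission format and is omitted here, and the name multi_logloss is mine:

import numpy as np

def multi_logloss(actual, predicted):
    # actual: one-hot matrix of true labels; predicted: matrix of class probabilities
    predicted = np.clip(np.asarray(predicted, dtype=float), 1e-15, 1 - 1e-15)
    actual = np.asarray(actual, dtype=float)
    # Negative mean log-probability assigned to the true class
    return -np.sum(actual * np.log(predicted)) / predicted.shape[0]

# Same toy data as the R usage example earlier
pred = np.array([[0.8, 0.2], [0.6, 0.4]])
act = np.array([[1, 0], [1, 0]])
print(multi_logloss(act, pred))  # ≈ 0.367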

A supplement from scikit-learn:

Log loss, a.k.a. logistic loss or cross-entropy loss.

This is the loss function used in (multinomial) logistic regression and extensions of it such as neural networks, defined as the negative log-likelihood of the true labels given a probabilistic classifier's predictions. The log loss is only defined for two or more labels. For a single sample with true label yt in {0,1} and estimated probability yp that yt = 1, the log loss is

-log P(yt|yp) = -(yt log(yp) + (1 - yt) log(1 - yp))
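scikit-learn exposes this metric as sklearn.metrics.log_loss; a quick usage sketch with toy numbers of my own:

from sklearn.metrics import log_loss

y_true = [0, 0, 1, 1]
# One row of class probabilities per sample, columns ordered [P(class 0), P(class 1)]
y_pred = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.01, 0.99]]
print(log_loss(y_true, y_pred))  # ≈ 0.17: mean negative log-probability of the true class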




Hamming Loss

The Hamming Loss measures accuracy in a multi-label classification task. The formula is given by:

$$\text{HammingLoss}(x_i, y_i) = \frac{1}{|D|}\sum_{i=1}^{|D|}\frac{\operatorname{xor}(x_i, y_i)}{|L|}$$

where

  • |D| is the number of samples
  • |L| is the number of labels
  • yi is the ground truth for sample i
  • xi is the prediction for sample i
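A quick way to sanity-check the definition is scikit-learn's hamming_loss, which accepts multi-label indicator matrices; the toy matrices below are my own:

import numpy as np
from sklearn.metrics import hamming_loss

# |D| = 2 samples, |L| = 3 labels, binary indicator format
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 1, 1],
                   [0, 1, 0]])
print(hamming_loss(y_true, y_pred))  # 1 wrong label out of 6 ≈ 0.167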

Mean Utility


Mean Utility is the weighted sum of true positives, true negatives, false positives, and false negatives. There are 4 parameters, one weight for each count.

The Mean Utility score is given by

$$\text{MeanUtility} = w_{tp} \cdot tp + w_{tn} \cdot tn + w_{fp} \cdot fp + w_{fn} \cdot fn, \quad \text{where } w_{tp}, w_{tn} > 0 \text{ and } w_{fp}, w_{fn} < 0$$

Kaggle's implementation of Mean Utility is directional, which means higher values are better.
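A minimal Python sketch of the score, assuming binary labels and illustrative weight values of my own choosing (the actual weight values are parameters of the metric, not fixed here):

import numpy as np

def mean_utility(y_true, y_pred, w_tp=1.0, w_tn=1.0, w_fp=-1.0, w_fn=-1.0):
    # Weighted sum of the TP, TN, FP, FN counts; higher is better
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return w_tp * tp + w_tn * tn + w_fp * fp + w_fn * fn

print(mean_utility([1, 0, 1, 0], [1, 0, 0, 1]))  # 1 TP + 1 TN - 1 FP - 1 FN = 0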


Kaggle does not define this further.

