Multi-class cross-validation model evaluation metrics (precision, recall, F1-score) and confusion matrix

Error encountered when a list of metrics is passed to cross_val_score:

ValueError: For evaluating multiple scores, use sklearn.model_selection.cross_validate instead. ['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'] was passed.

cross_val_score accepts only a single scoring metric; to evaluate several metrics in one run, use cross_validate.
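To make the failure mode concrete, here is a minimal sketch that triggers the error and then the cross_validate call that fixes it (same iris/KNN setup as below):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, cross_validate
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=5)
metrics = ['accuracy', 'precision_macro', 'recall_macro', 'f1_macro']

# cross_val_score expects a single metric; passing a list raises ValueError
raised = False
try:
    cross_val_score(clf, X, y, scoring=metrics, cv=5)
except ValueError as e:
    raised = True
    print(e)

# cross_validate accepts a list (or dict) of metrics
scores = cross_validate(clf, X, y, scoring=metrics, cv=5)
print(sorted(k for k in scores if k.startswith('test_')))
```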

1. Multi-class cross-validation precision, recall, F1-score and confusion matrix code:

import sklearn
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_validate, cross_val_predict
from sklearn.metrics import confusion_matrix

iris = load_iris()

# Cross-validation with several scoring metrics at once
clf = KNeighborsClassifier(n_neighbors=5)
scoring = ['accuracy', 'precision_macro', 'recall_macro', 'f1_macro']  # metrics to compute
scores = cross_validate(clf, iris.data, iris.target, scoring=scoring, cv=5, return_train_score=False)
print(sorted(sklearn.metrics.SCORERS.keys()))  # list all built-in scorer names

print("Accuracy (Testing):  %0.2f (+/- %0.2f)" % (scores['test_accuracy'].mean(), scores['test_accuracy'].std() * 2))
print("Precision (Testing):  %0.2f (+/- %0.2f)" % (scores['test_precision_macro'].mean(), scores['test_precision_macro'].std() * 2))
print("Recall (Testing):  %0.2f (+/- %0.2f)" % (scores['test_recall_macro'].mean(), scores['test_recall_macro'].std() * 2))
print("F1-Score (Testing):  %0.2f (+/- %0.2f)" % (scores['test_f1_macro'].mean(), scores['test_f1_macro'].std() * 2))

# Out-of-fold predictions for the confusion matrix
y_pred = cross_val_predict(clf, iris.data, iris.target, cv=5)
print(confusion_matrix(iris.target, y_pred))

Output:

['accuracy', 'adjusted_mutual_info_score', 'adjusted_rand_score', 'average_precision', 'balanced_accuracy', 'completeness_score', 'explained_variance', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'fowlkes_mallows_score', 'homogeneity_score', 'jaccard', 'jaccard_macro', 'jaccard_micro', 'jaccard_samples', 'jaccard_weighted', 'max_error', 'mutual_info_score', 'neg_brier_score', 'neg_log_loss', 'neg_mean_absolute_error', 'neg_mean_gamma_deviance', 'neg_mean_poisson_deviance', 'neg_mean_squared_error', 'neg_mean_squared_log_error', 'neg_median_absolute_error', 'neg_root_mean_squared_error', 'normalized_mutual_info_score', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc', 'roc_auc_ovo', 'roc_auc_ovo_weighted', 'roc_auc_ovr', 'roc_auc_ovr_weighted', 'v_measure_score']
Accuracy (Testing):  0.97 (+/- 0.05)
Precision (Testing):  0.98 (+/- 0.04)
Recall (Testing):  0.97 (+/- 0.05)
F1-Score (Testing):  0.97 (+/- 0.05)
[[50  0  0]
 [ 0 47  3]
 [ 0  1 49]]
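The macro-averaged scores can also be recomputed by hand from the pooled confusion matrix printed above (the result differs slightly from the mean of per-fold scores, since pooling weights every sample equally rather than every fold). A sketch of the arithmetic:

```python
import numpy as np

# Confusion matrix from the cross_val_predict output above
C = np.array([[50, 0, 0],
              [0, 47, 3],
              [0, 1, 49]])

# Per-class precision: diagonal over column sums (predicted counts);
# per-class recall: diagonal over row sums (true counts)
precision = np.diag(C) / C.sum(axis=0)
recall = np.diag(C) / C.sum(axis=1)
f1 = 2 * precision * recall / (precision + recall)

# Macro average = unweighted mean over classes
print(precision.mean(), recall.mean(), f1.mean())
```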

2. Model evaluation metrics:

import sklearn
sorted(sklearn.metrics.SCORERS.keys())

['accuracy',
 'adjusted_mutual_info_score',
 'adjusted_rand_score',
 'average_precision',
 'balanced_accuracy',
 'completeness_score',
 'explained_variance',
 'f1',
 'f1_macro',
 'f1_micro',
 'f1_samples',
 'f1_weighted',
 'fowlkes_mallows_score',
 'homogeneity_score',
 'jaccard',
 'jaccard_macro',
 'jaccard_micro',
 'jaccard_samples',
 'jaccard_weighted',
 'max_error',
 'mutual_info_score',
 'neg_brier_score',
 'neg_log_loss',
 'neg_mean_absolute_error',
 'neg_mean_gamma_deviance',
 'neg_mean_poisson_deviance',
 'neg_mean_squared_error',
 'neg_mean_squared_log_error',
 'neg_median_absolute_error',
 'neg_root_mean_squared_error',
 'normalized_mutual_info_score',
 'precision',
 'precision_macro',
 'precision_micro',
 'precision_samples',
 'precision_weighted',
 'r2',
 'recall',
 'recall_macro',
 'recall_micro',
 'recall_samples',
 'recall_weighted',
 'roc_auc',
 'roc_auc_ovo',
 'roc_auc_ovo_weighted',
 'roc_auc_ovr',
 'roc_auc_ovr_weighted',
 'v_measure_score']
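Besides these built-in string names, scoring also accepts a dict mapping names to scorer objects, and make_scorer wraps any metric function with fixed parameters. A minimal sketch (note: recent scikit-learn releases replaced the SCORERS dict with sklearn.metrics.get_scorer_names(); the 'f1_weighted_manual' name here is just an arbitrary key):

```python
from sklearn.datasets import load_iris
from sklearn.metrics import make_scorer, f1_score
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=5)

# Mix a custom scorer (metric function + fixed kwargs) with a built-in name
scoring = {
    'f1_weighted_manual': make_scorer(f1_score, average='weighted'),
    'acc': 'accuracy',
}
scores = cross_validate(clf, X, y, scoring=scoring, cv=5)
print(scores['test_f1_weighted_manual'].mean(), scores['test_acc'].mean())
```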

3. classification_report usage example:

from sklearn.metrics import classification_report
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))

Output:

             precision    recall  f1-score   support

    class 0       0.50      1.00      0.67         1
    class 1       0.00      0.00      0.00         1
    class 2       1.00      0.67      0.80         3

avg / total       0.70      0.60      0.61         5

The left column lists the class labels, and the support column on the right gives the number of occurrences of each label. The avg / total row holds the support-weighted average of each column (for the support column, the total). The precision, recall and f1-score columns give each class's precision, recall and F1 value. (Since scikit-learn 0.20 the report shows separate "macro avg" and "weighted avg" rows instead of a single "avg / total" row.)
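The report's numbers can be reproduced with precision_recall_fscore_support; the avg / total row is the support-weighted mean of each per-class column:

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

# Per-class arrays matching the report's class rows
p, r, f, support = precision_recall_fscore_support(y_true, y_pred)
print(p, r, f, support)

# Support-weighted averages matching the avg / total row
p_avg, r_avg, f_avg, _ = precision_recall_fscore_support(
    y_true, y_pred, average='weighted')
print(p_avg, r_avg, f_avg)  # 0.70, 0.60, ~0.61
```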
