Contents
1. Accuracy (accuracy_score)
2. Precision (precision_score)
3. Recall (recall_score)
4. F1 score (f1_score)
5. Confusion matrix (confusion_matrix)
6. Classification report (classification_report)
https://hg95.github.io/sklearn-notes/Chapter2/
1. Accuracy (accuracy_score)
Accuracy is the proportion of all samples that are predicted correctly: (TP + TN) / (TP + FP + TN + FN). It measures the classifier's overall performance, but it can be misleading on imbalanced datasets, where always predicting the majority class already yields a high score.
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_pred)
#accuracy_score(y_test, y_pred, normalize=True, sample_weight=None)
Parameter reference:
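A minimal sketch of the imbalanced-data caveat above, using hypothetical toy labels (9 negatives, 1 positive):

```python
from sklearn.metrics import accuracy_score

# Toy imbalanced labels (hypothetical data): 9 negatives, 1 positive.
y_test = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
y_pred = [0] * 10  # a classifier that always predicts the majority class

acc = accuracy_score(y_test, y_pred)
print(acc)  # 0.9 -- looks strong, yet the single positive was missed
# normalize=False returns the raw count of correct predictions instead
n_correct = accuracy_score(y_test, y_pred, normalize=False)
print(n_correct)  # 9
```

Here 9 of 10 predictions match, so accuracy is (TP + TN) / total = 9/10 even though recall on the positive class is zero.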
2. Precision (precision_score)
Precision is the fraction of samples predicted as positive that are truly positive: TP / (TP + FP). It reflects how many of the predicted positives are real positives.
from sklearn.metrics import precision_score
precision = precision_score(y_test, y_pred, average='macro')
#precision_score(y_test, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')
Parameter reference: https://blog.csdn.net/qq_41289920/article/details/104414878
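A small worked example of the TP / (TP + FP) formula, on hypothetical binary labels (the default average='binary' scores the positive class only):

```python
from sklearn.metrics import precision_score

# Hypothetical binary labels.
y_test = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]
# Predicted positives sit at indices 0, 2, 4: TP = 2 (indices 0, 4), FP = 1 (index 2).
prec = precision_score(y_test, y_pred)
print(prec)  # TP / (TP + FP) = 2/3
```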
3. Recall (recall_score)
Recall (also known as the true positive rate or sensitivity) is the fraction of actual positives that are correctly identified: TP / (TP + FN). It measures the classifier's ability to find all positive samples.
from sklearn.metrics import recall_score
recall = recall_score(y_test, y_pred, average='macro')
#recall_score(y_test, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')
Parameter reference: https://vimsky.com/examples/usage/python-sklearn.metrics.recall_score-sk.html
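A small worked example of the TP / (TP + FN) formula, again on hypothetical binary labels:

```python
from sklearn.metrics import recall_score

# Hypothetical binary labels.
y_test = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]
# Actual positives sit at indices 0, 1, 4: TP = 2 (indices 0, 4), FN = 1 (index 1).
rec = recall_score(y_test, y_pred)
print(rec)  # TP / (TP + FN) = 2/3
```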
4. F1 score (f1_score)
The F1 score is the harmonic mean of precision and recall, used as a single summary of classifier performance: 2 * (Precision * Recall) / (Precision + Recall). Because the harmonic mean is dominated by the smaller of the two, F1 is high only when precision and recall are both high.
from sklearn.metrics import f1_score
f1 = f1_score(y_test, y_pred, average='macro')
#f1_score(y_test, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')
Parameter reference: https://blog.csdn.net/qq_40671063/article/details/130447922
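A quick check, on hypothetical binary labels, that f1_score matches the harmonic-mean formula above:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical binary labels.
y_test = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
# f1 agrees with 2 * (Precision * Recall) / (Precision + Recall)
assert abs(f1 - 2 * p * r / (p + r)) < 1e-12
print(f1)  # 2/3 here, since precision == recall == 2/3
```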
5. Confusion matrix (confusion_matrix)
The confusion matrix is a table describing how a model's predicted classes relate to the actual classes. For binary classification it contains four counts: true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). In scikit-learn's convention, rows correspond to true labels and columns to predicted labels.
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
#confusion_matrix(y_test, y_pred, labels=None, sample_weight=None)
Parameter reference: https://blog.csdn.net/SartinL/article/details/105844832
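A sketch of unpacking the four counts from a binary confusion matrix, on hypothetical labels:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical binary labels.
y_test = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]
cm = confusion_matrix(y_test, y_pred)
# For binary labels the layout is:
# [[TN, FP],
#  [FN, TP]]   (rows = true labels, columns = predicted labels)
tn, fp, fn, tp = cm.ravel()
print(tn, fp, fn, tp)  # 2 1 1 2
```

Flattening with `.ravel()` is a common idiom for pulling TN/FP/FN/TP out of the 2x2 matrix.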
6. Classification report (classification_report)
The classification report is a detailed summary combining precision, recall, F1 score, and support, used to evaluate a classifier comprehensively. It reports these metrics for each class individually.
from sklearn.metrics import classification_report
cr = classification_report(y_test, y_pred)
#classification_report(y_test, y_pred, labels=None, target_names=None, sample_weight=None, digits=2, output_dict=False, zero_division='warn')
Parameter reference: https://blog.csdn.net/weixin_48964486/article/details/122881350
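A sketch of accessing per-class metrics programmatically, on hypothetical labels; the `target_names` values here are illustrative:

```python
from sklearn.metrics import classification_report

# Hypothetical binary labels.
y_test = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]
# target_names labels the rows; output_dict=True returns nested dicts
# instead of a formatted string, which is handy for programmatic access.
report = classification_report(y_test, y_pred,
                               target_names=['negative', 'positive'],
                               output_dict=True)
print(report['positive']['precision'])  # per-class precision
print(report['positive']['recall'])     # per-class recall
```

With `output_dict=False` (the default) the same information comes back as a human-readable table string.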