Data Mining: Model Evaluation

1. Classification Model Evaluation (Part 1)

1.1 Binary Classification

  • In most cases we care more about the positive class.
  • Confusion matrix:
  1. TP (True Positive): positive samples correctly predicted as positive

  2. FN (False Negative): positive samples incorrectly predicted as negative

  3. FP (False Positive): negative samples incorrectly predicted as positive

  4. TN (True Negative): negative samples correctly predicted as negative

                            Predicted negative   Predicted positive
         Actual negative           TN                   FP
         Actual positive           FN                   TP
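
  This is also the layout returned by sklearn's confusion_matrix for binary labels (rows are actual classes, columns are predicted classes, negative class first). A minimal sketch with made-up labels, not taken from the article:

from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1, 1]   # actual labels (0 = negative, 1 = positive)
y_pred = [0, 1, 0, 1, 1, 0, 1]   # predicted labels

# Rows = actual class, columns = predicted class:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
# [[2 1]
#  [1 3]]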
    
  • Key metrics (a small sketch computing them from a confusion matrix follows this list):
  1. Accuracy: (TP+TN)/(TP+TN+FP+FN)

  2. TPR (True Positive Rate), also called Recall: TP/(TP+FN)

  3. F-measure (F1 score): 2*Precision*Recall/(Precision+Recall), the harmonic mean of precision and recall

  4. Precision: TP/(TP+FP)

  5. FPR (False Positive Rate), also called the false acceptance rate: FP/(FP+TN)

  6. FRR (False Reject Rate), the false rejection rate: FN/(TP+FN), i.e. 1 - Recall
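
  A minimal sketch (toy labels, not from the article) that computes these metrics directly from the TP/FP/FN/TN counts and prints them next to sklearn's results for comparison:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 0, 0, 1, 1, 1, 1]            # actual labels
y_pred = [0, 1, 0, 1, 1, 0, 1]            # predicted labels
TP, FP, FN, TN = 3, 1, 1, 2               # hypothetical counts read off the confusion matrix for these labels

accuracy  = (TP + TN) / (TP + TN + FP + FN)
recall    = TP / (TP + FN)
precision = TP / (TP + FP)
f1        = 2 * precision * recall / (precision + recall)
fpr       = FP / (FP + TN)                 # false acceptance rate
frr       = FN / (TP + FN)                 # false rejection rate, equals 1 - recall

# sklearn should report the same accuracy / recall / precision / F1
print(accuracy,  accuracy_score(y_true, y_pred))
print(recall,    recall_score(y_true, y_pred))
print(precision, precision_score(y_true, y_pred))
print(f1,        f1_score(y_true, y_pred))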

1.2 Multi-class Classification

                             Predicted class
                       Y1    Y2   ...   Yn
  Actual class:  Y1
                 Y2
                 ...
                 Yn
  • Key metrics:
  1. Accuracy: defined exactly as in the binary case
  2. Recall and F-measure: two approaches (see the sketch after this list)
    (1) Micro-averaging: sum TP, FN (and FP) over all classes first, then apply the binary formula once to the pooled counts
    (2) Macro-averaging: treat each class in turn as the positive class, compute a per-class recall or F-measure, then take the (weighted or unweighted) average
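
  A minimal sketch (hypothetical labels) showing that the hand-computed micro and macro averages match sklearn's average="micro" and average="macro":

import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 2, 0, 2]

cm = confusion_matrix(y_true, y_pred)      # rows = actual class, columns = predicted class
TP = np.diag(cm)                           # per-class true positives
FN = cm.sum(axis=1) - TP                   # per-class false negatives

# Micro: pool the counts over all classes, then apply the binary formula once
micro_recall = TP.sum() / (TP.sum() + FN.sum())

# Macro: one recall per class, then an unweighted mean
macro_recall = np.mean(TP / (TP + FN))

print(micro_recall, recall_score(y_true, y_pred, average="micro"))
print(macro_recall, recall_score(y_true, y_pred, average="macro"))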

1.3 Code

# sklearn implementation:
from sklearn.datasets import load_iris
from sklearn.metrics import recall_score, accuracy_score, f1_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
import pandas as pd

# load the iris data set (three classes)
data = load_iris()
X = data["data"]
Y = data["target"]

# split into training and test sets, then fit a k-nearest-neighbours classifier
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
knn_model = KNeighborsClassifier(n_neighbors=6)
knn_model.fit(X_train, Y_train)
Y_predict = knn_model.predict(X_test)

# multi-class metrics: accuracy plus micro-/macro-averaged recall and F1
print("*"*8, "metrics", "*"*8)
print("ACC:", accuracy_score(Y_test, Y_predict))
print("recall for micro:", recall_score(Y_test, Y_predict, average="micro"))
print("recall for macro:", recall_score(Y_test, Y_predict, average="macro"))
print("f1 for micro:", f1_score(Y_test, Y_predict, average="micro"))
print("f1 for macro:", f1_score(Y_test, Y_predict, average="macro"))
******** metrics ********
ACC: 0.9736842105263158
recall for micro: 0.9736842105263158
recall for macro: 0.9629629629629629
f1 for micro: 0.9736842105263158
f1 for macro: 0.9696394686907022

2. Classification Model Evaluation (Part 2)

2.1 ROC and AUC

  • ROC: Receiver Operating Characteristic curve, i.e. TPR (recall) plotted against FPR as the classification threshold is swept (a small sketch follows these bullets)
  • AUC: Area Under the Curve, the area under the ROC curve; values closer to 1 indicate a better classifier
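
  A minimal sketch (made-up scores, not from the article) showing how each threshold contributes one (FPR, TPR) point and that auc(fpr, tpr) agrees with roc_auc_score computed directly from the scores:

import numpy as np
from sklearn.metrics import roc_curve, auc, roc_auc_score

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # actual labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])   # predicted P(positive)

# Each threshold yields one (FPR, TPR) point; the ROC curve joins these points
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:.2f}: FPR = {f:.2f}, TPR = {t:.2f}")

print("AUC from the (fpr, tpr) points:", auc(fpr, tpr))
print("AUC directly from the scores:  ", roc_auc_score(y_true, y_score))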

2.2 Code

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import os
os.environ["PATH"] += os.pathsep + "D://bin/"  # append a local tool directory to PATH (environment-specific)

# Prepare the data
features = pd.read_excel("./data.xlsx", sheet_name="features", header=0)
label = pd.read_excel("./data.xlsx", sheet_name="label", header=0)

# Split into training (60%), validation (20%), and test (20%) sets
def data_split(X, Y):
    X_tt, X_validation, Y_tt, Y_validation = train_test_split(X, Y, test_size=0.2)
    X_train, X_test, Y_train, Y_test = train_test_split(X_tt, Y_tt, test_size=0.25)
    return X_train, Y_train, X_validation, Y_validation, X_test, Y_test

X_train, Y_train, X_validation, Y_validation, X_test, Y_test = data_split(features.values, label.values)

# Build the neural network model
from keras.models import Sequential
# Model skeleton
nn_model = Sequential()
# Input layer
from keras.layers import Dense, Activation
nn_model.add(Dense(50, input_dim=len(X_train[0])))
nn_model.add(Activation("sigmoid"))
# Hidden layer
nn_model.add(Dense(10))
nn_model.add(Activation("sigmoid"))
# Output layer: two units with softmax, one per class
nn_model.add(Dense(2))
nn_model.add(Activation("softmax"))

# Compile the network
from keras.optimizers import SGD, Adam
sgd = SGD(lr=0.51)    # lr is the learning rate
adam = Adam(lr=0.01)
nn_model.compile(loss="mean_squared_error", optimizer=adam)
"""
Switching to the Adam optimizer gives better results here than SGD.
"""

# Train the network
Y_train_nn = np.array([[1, 0] if i == 0 else [0, 1] for i in Y_train])  # one-hot encode the binary labels
nn_model.fit(X_train, Y_train_nn, epochs=1000, batch_size=4000)  # epochs is the number of training passes, batch_size the number of samples per update

# Predict class labels with the model
validation_predict = nn_model.predict_classes(X_validation)
test_predict = nn_model.predict_classes(X_test)
train_predict = nn_model.predict_classes(X_train)

# Evaluate predictions (sklearn metrics expect y_true first, then y_pred)
def model_metrics(y_true, y_pred, name):
    from sklearn.metrics import f1_score, recall_score, accuracy_score, precision_score
    print(name, ":")
    print("\tf1_score", f1_score(y_true, y_pred))
    print("\taccuracy_score", accuracy_score(y_true, y_pred))
    print("\trecall_score", recall_score(y_true, y_pred))
    print("\tprecision_score", precision_score(y_true, y_pred))

model_metrics(Y_train, train_predict, "Training set")
model_metrics(Y_validation, validation_predict, "Validation set")
model_metrics(Y_test, test_predict, "Test set")

# Predicted probability of the positive class on the test set
Y_predict_test = nn_model.predict(X_test)
Y_predict_test = Y_predict_test[:, 1]  # column 1 holds P(positive class)

from sklearn.metrics import roc_curve, auc, roc_auc_score
import matplotlib.pyplot as plt

# Compute FPR (false acceptance rate) and TPR (recall); threshold holds the decision thresholds
fpr, tpr, threshold = roc_curve(Y_test, Y_predict_test)
plt.plot(fpr, tpr)
plt.xlabel("fpr")
plt.ylabel("tpr")
plt.show()

# AUC computed two equivalent ways: from the (fpr, tpr) points and directly from the scores
print("AUC", auc(fpr, tpr))
print("AUC", roc_auc_score(Y_test, Y_predict_test))

by CyrusMay 2022 04 06
