Evaluate ML Models Notes

Evaluate Models

Dummy Models

A dummy model assigns predictions from the target distribution alone, ignoring every feature. There are multiple strategies: 1. Predict the most frequent class 2. Always predict a chosen constant class 3. Stratified prediction (sample randomly according to the training class distribution)

It serves as a baseline against which real models can be compared.

from sklearn.dummy import DummyClassifier

# Always predict the majority class of the training set
dummy_majority = DummyClassifier(strategy='most_frequent').fit(X_train, y_train)
y_dummy_predictions = dummy_majority.predict(X_test)

# Baseline accuracy on the test set
dummy_majority.score(X_test, y_test)
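The other two listed strategies map onto DummyClassifier options as well; a minimal sketch, reusing the X_train/y_train split above (the 'constant' example assumes class 1 exists in y_train):

# Predict randomly according to the training class distribution
dummy_stratified = DummyClassifier(strategy='stratified').fit(X_train, y_train)
# Always predict a chosen class (here: class 1)
dummy_constant = DummyClassifier(strategy='constant', constant=1).fit(X_train, y_train)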

Confusion Matrix

Use it to inspect false positives and false negatives directly.

from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report

# Per-class precision, recall, and F1 in one text report
print(classification_report(y_test, y_predicted))
# Rows are true classes, columns are predicted classes
confusion = confusion_matrix(y_test, y_predicted)
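For a binary problem, the four cells can be unpacked directly with ravel(); a toy sketch with hypothetical labels:

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_pred = np.array([0, 1, 0, 1, 0, 1, 1, 0])

# Binary layout: [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 3 1 1 3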

Sensitivity/True Positive Rate/Recall: How many positive cases are identified?
TP/(TP+FN)

Precision: How many predicted positive cases are correct?
TP/(TP+FP)

False Positive Rate: What fraction of all negative cases is incorrectly identified as positive?
FP/(TN+FP)
(Note: this is 1 - Specificity; Specificity itself is TN/(TN+FP).)

Raising sensitivity (recall) generally comes at the cost of precision, and vice versa: this is the precision-recall trade-off.

The F score combines precision and recall into a single number; F1 is their harmonic mean: F1 = 2 * Precision * Recall / (Precision + Recall).
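A minimal sketch with sklearn's f1_score on hypothetical labels, confirming it matches the formula:

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 0, 1, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0, 1]

p = precision_score(y_true, y_pred)   # TP=3, FP=1 -> 0.75
r = recall_score(y_true, y_pred)      # TP=3, FN=1 -> 0.75
print(f1_score(y_true, y_pred))       # 2*p*r/(p+r) = 0.75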

Curves

Precision-Recall Curves:
X axis: Precision
Y axis: Recall

Top right corner is the ideal point (precision = 1, recall = 1)
ROC Curves
X axis: False Positive Rate
Y axis: True Positive Rate

Top left corner is the ideal point: false positive rate zero and true positive rate one.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

# y_scores_lr: real-valued classifier scores on the test set,
# e.g. from lr.decision_function(X_test)
precision, recall, thresholds = precision_recall_curve(y_test, y_scores_lr)
# Operating point whose decision threshold is closest to zero
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]

plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
# Mark the threshold-zero operating point
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize=12, fillstyle='none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.gca().set_aspect('equal')
plt.show()

## ROC curve and AUC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc

# X, y_binary_imbalanced: features and an imbalanced binary target
X_train, X_test, y_train, y_test = train_test_split(X, y_binary_imbalanced, random_state=0)

# A ROC curve needs real-valued scores, not hard class labels
lr = LogisticRegression()
y_score_lr = lr.fit(X_train, y_train).decision_function(X_test)
fpr_lr, tpr_lr, _ = roc_curve(y_test, y_score_lr)
roc_auc_lr = auc(fpr_lr, tpr_lr)

plt.figure()
plt.xlim([-0.01, 1.00])
plt.ylim([-0.01, 1.01])
plt.plot(fpr_lr, tpr_lr, lw=3, label='LogRegr ROC curve (area = {:0.2f})'.format(roc_auc_lr))
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curve (1-of-10 digits classifier)', fontsize=16)
plt.legend(loc='lower right', fontsize=13)
# Diagonal = a random classifier (AUC = 0.5)
plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
plt.gca().set_aspect('equal')
plt.show()

Multi-Class Evaluation

Confusion Matrix (each class is evaluated one-vs-rest against all the others).

Macro Precision: compute precision for each class separately, then take the unweighted mean, so every class counts equally.
Micro Precision: pool the true/false positive counts of all classes into one overall precision, so every instance counts equally.

If macro precision is low, examine the small classes; they weigh as much as the large ones in the macro average.
If micro precision is low, examine the large classes; they dominate the pooled counts.

Set the average parameter to choose between macro and micro averaging.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

precision_score(y_test_mc, svm_predicted_mc, average='micro')   # every instance counts equally
precision_score(y_test_mc, svm_predicted_mc, average='macro')   # every class counts equally
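A toy sketch with hypothetical labels showing how the two averages diverge when one class is small:

from sklearn.metrics import precision_score

# Class 0 is large (6 samples), class 1 is small (2 samples)
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1, 1, 0]

print(precision_score(y_true, y_pred, average='micro'))  # 6/8 = 0.75 (pooled counts)
print(precision_score(y_true, y_pred, average='macro'))  # (5/6 + 1/2)/2 = 0.667 (small class weighs equally)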

GridSearch

from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

## cross_val_score
# 5-fold cross-validation scored by AUC (clf can be any estimator, e.g. SVC())
cross_val_score(clf, X, y, cv=5, scoring='roc_auc')

## Grid Search
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import roc_auc_score

dataset = load_digits()
# Binary task: is the digit a 1?
X, y = dataset.data, dataset.target == 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel='rbf')
grid_values = {'gamma': [0.001, 0.01, 0.05, 0.1, 1, 10, 100]}

# default metric to optimize over grid parameters: accuracy
grid_clf_acc = GridSearchCV(clf, param_grid=grid_values)
grid_clf_acc.fit(X_train, y_train)
y_decision_fn_scores_acc = grid_clf_acc.decision_function(X_test)

print('Grid best parameter (max. accuracy): ', grid_clf_acc.best_params_)
print('Grid best score (accuracy): ', grid_clf_acc.best_score_)
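To select parameters by AUC instead of accuracy, pass scoring='roc_auc' to GridSearchCV; a sketch continuing from the code above (this is where the roc_auc_score import comes in):

# Same grid, but gamma is selected by cross-validated AUC
grid_clf_auc = GridSearchCV(clf, param_grid=grid_values, scoring='roc_auc')
grid_clf_auc.fit(X_train, y_train)
y_decision_fn_scores_auc = grid_clf_auc.decision_function(X_test)

print('Test set AUC: ', roc_auc_score(y_test, y_decision_fn_scores_auc))
print('Grid best parameter (max. AUC): ', grid_clf_auc.best_params_)
print('Grid best score (AUC): ', grid_clf_auc.best_score_)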
