In the previous article, the following code raised a question:
prob = pd.DataFrame(clf.decision_function(Xtest))
prob.loc[prob.iloc[:,0] >= 0.13157937002761821,"y_pred"] = 1
prob.loc[prob.iloc[:,0] < 0.13157937002761821,"y_pred"] = 0
Why does a sample whose decision value is greater than or equal to the threshold get label 1, while a sample below it gets label 0? This article explores that question and gives a more detailed explanation of decision_function.
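Before plotting anything, it helps to note that for a binary SVC the default threshold on decision_function is 0: predict returns the positive class exactly when the decision value is non-negative. The threshold 0.13157937002761821 above is simply a custom cutoff replacing that default. A small sketch to check the default behavior (the dataset here is illustrative, not the one from the previous article):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# a toy two-class dataset
X, y = make_classification(n_samples=100, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1, random_state=1)
clf = SVC(kernel="linear").fit(X, y)

d = clf.decision_function(X)
# thresholding the decision values at 0 reproduces predict
pred_from_d = (d >= 0).astype(int)
print((pred_from_d == clf.predict(X)).all())
```

So raising the threshold above 0, as in the code from the previous article, just makes the classifier more conservative about assigning label 1.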
Let's start with binary classification data; for ease of visualization, we use two-dimensional features.
from sklearn.svm import SVC
import numpy as np
from sklearn.datasets import make_classification
import matplotlib.pyplot as plt
import pandas as pd
# generate a small two-class dataset with two informative features
X,y = make_classification(n_samples=10
                         ,n_features=2
                         ,n_informative=2
                         ,n_redundant=0
                         ,n_classes=2
                         ,n_clusters_per_class=1
                         ,random_state=1)
plt.scatter(X[:,0],X[:,1],c=y)  # visualize the two classes
To make decision_function easier to understand, we also draw the separating hyperplane and the margin boundaries:
# build a 30x30 grid covering the data range, with a small margin
X_min,X_max = X[:,0].min()-.5,X[:,0].max()+.5
Y_min,Y_max = X[:,1].min()-.5,X[:,1].max()+.5
xx,yy = np.meshgrid(np.linspace(X_min,X_max,30),np.linspace(Y_min,Y_max,30))
xy = np.c_[xx.ravel(),yy.ravel()]
clf = SVC(kernel="linear",probability=True).fit(X,y)
# decision_function value at every grid point
Z = clf.decision_function(xy).reshape(xx.shape)
# plot the samples, the separating hyperplane (level 0) and the margins (levels -1 and 1)
plt.scatter(X[:,0],X[:,1],c=y)
plt.contour(xx,yy,Z,colors="k",levels=[-1,0,1],alpha=0.5,linestyles=["--","-","--"])
plt.show()
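The contour at level 0 is the separating hyperplane because, for a linear kernel, decision_function is just the signed quantity w·x + b, where w and b are the fitted coef_ and intercept_ attributes. A minimal sketch verifying this on the same kind of toy data (assuming a binary, linear-kernel SVC):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=10, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=2,
                           n_clusters_per_class=1, random_state=1)
clf = SVC(kernel="linear").fit(X, y)

d = clf.decision_function(X)
# for a linear kernel, decision_function(X) == X @ w + b
manual = X @ clf.coef_[0] + clf.intercept_[0]
print(np.allclose(d, manual))
```

The sign of w·x + b tells us which side of the hyperplane a sample falls on, which is exactly why thresholding the decision value assigns the labels.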