Confidence probability:
The confidence of a predicted class can be quantified by how far the sample lies from the decision boundary: the closer a sample is to the boundary, the lower its confidence probability; conversely, the farther it is from the boundary, the higher its confidence probability.
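This distance-to-boundary intuition can be sketched on synthetic data (the 1-D clusters and probe points below are illustrative assumptions, not the balance dataset): a point near the boundary gets a top-class probability close to 0.5, while a distant point gets one close to 1.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic 1-D two-class data: class 0 centered at x=0, class 1 at x=4
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 1)),
               rng.normal(4.0, 0.3, size=(20, 1))])
y = np.array([0] * 20 + [1] * 20)

model = SVC(kernel='linear', probability=True, random_state=0)
model.fit(X, y)

# One probe near the decision boundary (x ≈ 2) and one far from it (x = 5)
near = np.array([[2.1]])
far = np.array([[5.0]])
probs_near = model.predict_proba(near)
probs_far = model.predict_proba(far)
print(abs(model.decision_function(near)[0]), probs_near.max())
print(abs(model.decision_function(far)[0]), probs_far.max())
# The far point has both a larger distance to the boundary and a higher
# top-class probability; the near point's probability sits closer to 0.5.
```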
Code implementation:
import numpy as np
import pandas as pd
import sklearn.svm as svm
import sklearn.model_selection as ms
import sklearn.metrics as sm
import matplotlib.pyplot as plt
data = pd.read_csv('C:/Users/81936/Desktop/balance.txt', delimiter=",")
data = np.array(data)
data1 = data[:, :-1]
data2 = data[:, -1]
data3 = []
# Map the string labels to numbers (the labels carry a leading space
# because the file is comma-delimited without stripping whitespace)
for i in data2:
    if i == ' R':
        data3.append(1)
    elif i == ' B':
        data3.append(2)
    elif i == ' L':
        data3.append(3)
data3 = np.array(data3)
data = np.column_stack((data1, data3))
x = np.array(data[:, :-1], dtype=float)
y = np.array(data[:, -1], dtype=float)
# Split the data into training and test sets with sklearn
# ms.train_test_split(): x and y are the inputs and outputs, test_size sets the
# proportion held out for testing, and random_state is the random seed
# (it keeps the split reproducible across runs)
train_x, test_x, train_y, test_y = ms.train_test_split(x, y, test_size=0.25, random_state=1)
# Build the SVM classifier
# probability=True enables confidence probabilities
model = svm.SVC(kernel='rbf', C=900, gamma=0.01, probability=True)
model.fit(train_x, train_y)
# Evaluate the model's predictions on the test set
pred_test_y = model.predict(test_x)
acc = sm.accuracy_score(test_y, pred_test_y)
print(acc)
# Create a few new samples
prob_x = np.array([[2, 1, 4, 4],
[2, 2, 1, 1],
[1, 2, 2, 1]])
pre_prob_y = model.predict(prob_x)
print(pre_prob_y)
# Get the confidence probabilities of the 3 new samples
probs = model.predict_proba(prob_x)
print(probs)
# Output:
# [[9.99999850e-01 1.00000015e-07 5.00000075e-08]
#  [9.24439482e-07 1.10808859e-06 9.99997967e-01]
#  [1.41564896e-01 6.97792182e-01 1.60642922e-01]]
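One caveat about these numbers: with probability=True, scikit-learn derives them via Platt scaling fitted by internal cross-validation, so each row sums to 1, but the most probable class can occasionally disagree with predict(). A minimal sanity check on synthetic data (the dataset below is generated for illustration, not the balance data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic 3-class problem with 4 features, mimicking the shape of the task
X, y = make_classification(n_samples=200, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=1)
model = SVC(kernel='rbf', C=900, gamma=0.01, probability=True, random_state=1)
model.fit(X, y)

probs = model.predict_proba(X[:5])
# Each row is a distribution over model.classes_ and sums to 1
print(probs.sum(axis=1))
# The most probable class usually matches predict(), but can differ because
# the probabilities come from a separately fitted calibration model
print(model.classes_[np.argmax(probs, axis=1)])
print(model.predict(X[:5]))
```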