Model Explainability
https://kserve.github.io/website/0.10/modelserving/explainer/explainer/
Concepts
InferenceService Explainer
Model explainability answers the question: "Why did my model make this prediction for a given instance?". KServe integrates with Alibi Explainer, which implements a black-box algorithm by generating a large number of similar-looking instances for a given input and sending them to the model server to produce an explanation.
KServe also integrates with the AI Explainability 360 (AIX360) toolkit, an LF AI Foundation incubation project. It is an open-source library that supports the interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms covering different dimensions of explanations, along with proxy explainability metrics. In addition to its native algorithms, AIX360 also provides algorithms from LIME and SHAP.
| Explainer | Examples |
|---|---|
| Deploy Alibi Image Explainer | Imagenet Explainer |
| Deploy Alibi Income Explainer | Income Explainer |
| Deploy Alibi Text Explainer | Alibi Text Explainer |
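Whichever explainer backend is used, an InferenceService with an explainer exposes an :explain endpoint alongside the usual :predict endpoint, and both accept the same request payload. The sketch below only illustrates this URL pattern; the service name my-model and the instances payload are placeholders, and INGRESS_HOST/INGRESS_PORT are assumed to be set as in the examples later on this page.

MODEL_NAME=my-model
SERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -o jsonpath='{.status.url}' | cut -d "/" -f 3)
# The payload shape depends entirely on the model; this one is a placeholder.
curl -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${MODEL_NAME}:predict -d '{"instances": [[0, 1, 2]]}'
curl -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${MODEL_NAME}:explain -d '{"instances": [[0, 1, 2]]}'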
Alibi
CIFAR10 Image Classifier Explanation
We will show an example of explanations on image data using a TensorFlow classifier built on CIFAR10, a ten-class image dataset.
Create the InferenceService with an Alibi Explainer
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "cifar10"
spec:
  predictor:
    tensorflow:
      storageUri: "gs://seldon-models/tfserving/cifar10/resnet32"
      resources:
        requests:
          cpu: 0.1
          memory: 5Gi
        limits:
          memory: 10Gi
  explainer:
    alibi:
      type: AnchorImages
      storageUri: "gs://seldon-models/tfserving/cifar10/explainer-py36-0.5.2"
      config:
        batch_size: "40"
        stop_on_first: "True"
      resources:
        requests:
          cpu: 0.1
          memory: 5Gi
        limits:
          memory: 10Gi
Note
The InferenceService resource describes:
- A pretrained TensorFlow model stored on a Google bucket
- An AnchorImages Seldon Alibi explainer; see the Alibi documentation for more details.
Test on a notebook
Run this example using the Jupyter notebook.
Once created you will be able to test the prediction and then request an explanation, as sketched below.
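The Jupyter notebook drives these two calls programmatically and renders the returned anchor image; a rough shell equivalent is sketched below. The file cifar10-input.json is a hypothetical payload file, not provided by this page, holding a single 32x32x3 CIFAR10 image in the {"instances": [...]} format.

MODEL_NAME=cifar10
SERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -o jsonpath='{.status.url}' | cut -d "/" -f 3)
# cifar10-input.json is a hypothetical file: {"instances": [<one 32x32x3 image as nested lists>]}
curl -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${MODEL_NAME}:predict -d @./cifar10-input.json
curl -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${MODEL_NAME}:explain -d @./cifar10-input.json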
Example Anchors Tabular Explanation for Income Prediction
This example uses a US income dataset to show an explanation on tabular data. You can also try out the Jupyter notebook for a visual walkthrough.
Create the InferenceService with an Alibi Explainer
We can create an InferenceService for this dataset with a trained sklearn predictor and an associated model explainer. The black-box explainer algorithm we will use is the Tabular version of Anchors from the Alibi open-source library. More details on this algorithm and the configuration settings that can be set are available in the Seldon Alibi documentation.
The InferenceService looks like:
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "income"
spec:
  predictor:
    minReplicas: 1
    sklearn:
      storageUri: "gs://seldon-models/sklearn/income/model"
      resources:
        requests:
          cpu: 0.1
          memory: 1Gi
        limits:
          cpu: 1
          memory: 1Gi
  explainer:
    minReplicas: 1
    alibi:
      type: AnchorTabular
      storageUri: "gs://seldon-models/sklearn/income/explainer-py37-0.6.0"
      resources:
        requests:
          cpu: 0.1
          memory: 1Gi
        limits:
          cpu: 1
          memory: 4Gi
Create the InferenceService with the above yaml:
kubectl create -f income.yaml
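Before sending requests, you may want to wait until the service reports ready; a quick check, using the same resource name as the yaml above, looks like this:

kubectl get inferenceservice income
# Proceed once the READY column shows True.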
The first step is to determine the ingress IP and port and set INGRESS_HOST and INGRESS_PORT.
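How INGRESS_HOST and INGRESS_PORT are obtained depends on your cluster setup; the snippet below is one common approach, assuming an Istio ingress gateway exposed through a LoadBalancer (adjust the namespace and service name to your environment).

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')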
MODEL_NAME=income
SERVICE_HOSTNAME=$(kubectl get inferenceservice income -o jsonpath='{.status.url}' | cut -d "/" -f 3)
Run the inference
Test the predictor:
curl -H "Host: $SERVICE_HOSTNAME" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:predict -d '{"instances":[[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]]}'
You should receive a response showing the prediction is for low salary:
Expected Output
{"predictions": [0]}
Run the explanation
Now let's get an explanation for the same input:
curl -H "Host: $SERVICE_HOSTNAME" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:explain -d '{"instances":[[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]]}'
The returned explanation will look like this:
Expected Output
{
"names": [
"Marital Status = Never-Married",
"Workclass = State-gov"
],
"precision": 0.9724770642201835,
"coverage": 0.0147,
"raw": {
"feature": [
3,
1
],
"mean": [
0.9129746835443038,
0.9724770642201835
],
"precision": [
0.9129746835443038,
0.9724770642201835
],
"coverage": [
0.3327,
0.0147
],
"examples": [
{
"covered": [
[
30,
"Self-emp-not-inc",
"Bachelors",
"Never-Married",
"Sales",
"Unmarried",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
69,
"Private",
"Dropout",
"Never-Married",
"Blue-Collar",
"Husband",
"White",
"Male",
9386,
"Capital Loss <= 0.00",
60,
"United-States"
],
[
44,
"Local-gov",
"Bachelors",
"Never-Married",
"White-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
52,
"United-States"
],
[
59,
"Private",
"High School grad",
"Never-Married",
"White-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
50,
"United-States"
],
[
55,
"Private",
"Bachelors",
"Never-Married",
"White-Collar",
"Husband",
"White",
"Male",
15024,
"Capital Loss <= 0.00",
55,
"United-States"
],
[
32,
"?",
"Bachelors",
"Never-Married",
"?",
"Unmarried",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
32,
"United-States"
],
[
47,
"Private",
"Dropout",
"Never-Married",
"Blue-Collar",
"Unmarried",
"Black",
"Female",
6849,
"Capital Loss <= 0.00",
40,
"United-States"
],
[
35,
"Private",
"Associates",
"Never-Married",
"Service",
"Not-in-family",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
65,
"United-States"
],
[
32,
"Private",
"High School grad",
"Never-Married",
"Blue-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
48,
"Private",
"Masters",
"Never-Married",
"White-Collar",
"Husband",
"Black",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
45,
"United-States"
]
],
"covered_true": [
[
32,
"Private",
"High School grad",
"Never-Married",
"Blue-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
44,
"Local-gov",
"Bachelors",
"Never-Married",
"White-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
52,
"United-States"
],
[
36,
"Private",
"High School grad",
"Never-Married",
"Blue-Collar",
"Unmarried",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
30,
"United-States"
],
[
56,
"Private",
"High School grad",
"Never-Married",
"Blue-Collar",
"Unmarried",
"Black",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
49,
"Local-gov",
"High School grad",
"Never-Married",
"Service",
"Unmarried",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
30,
"United-States"
],
[
20,
"?",
"High School grad",
"Never-Married",
"?",
"Own-child",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
10,
"United-States"
],
[
22,
"?",
"High School grad",
"Never-Married",
"?",
"Own-child",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
"Hours per week > 45.00",
"United-States"
],
[
29,
"Private",
"High School grad",
"Never-Married",
"Service",
"Own-child",
"Asian-Pac-Islander",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
25,
"SE-Asia"
],
[
45,
"Local-gov",
"Masters",
"Never-Married",
"Professional",
"Unmarried",
"White",
"Female",
1506,
"Capital Loss <= 0.00",
45,
"United-States"
],
[
27,
"Private",
"High School grad",
"Never-Married",
"Blue-Collar",
"Not-in-family",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
50,
"United-States"
]
],
"covered_false": [
[
29,
"Private",
"Bachelors",
"Never-Married",
"Service",
"Husband",
"White",
"Male",
7298,
"Capital Loss <= 0.00",
42,
"United-States"
],
[
56,
"Private",
"Associates",
"Never-Married",
"Sales",
"Husband",
"White",
"Male",
15024,
"Capital Loss <= 0.00",
40,
"United-States"
],
[
47,
"Private",
"Masters",
"Never-Married",
"Sales",
"Not-in-family",
"White",
"Male",
27828,
"Capital Loss <= 0.00",
60,
"United-States"
],
[
40,
"Private",
"Associates",
"Never-Married",
"White-Collar",
"Husband",
"White",
"Male",
7688,
"Capital Loss <= 0.00",
44,
"United-States"
],
[
55,
"Self-emp-not-inc",
"High School grad",
"Never-Married",
"White-Collar",
"Not-in-family",
"White",
"Male",
34095,
"Capital Loss <= 0.00",
60,
"United-States"
],
[
53,
"Private",
"Masters",
"Never-Married",
"White-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
48,
"United-States"
],
[
47,
"Federal-gov",
"Doctorate",
"Never-Married",
"Professional",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
53,
"Private",
"High School grad",
"Never-Married",
"White-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
1977,
40,
"United-States"
],
[
46,
"Private",
"Bachelors",
"Never-Married",
"Sales",
"Not-in-family",
"White",
"Male",
8614,
"Capital Loss <= 0.00",
40,
"United-States"
],
[
44,
"Local-gov",
"Prof-School",
"Never-Married",
"Professional",
"Not-in-family",
"White",
"Male",
10520,
"Capital Loss <= 0.00",
40,
"United-States"
]
],
"uncovered_true": [],
"uncovered_false": []
},
{
"covered": [
[
41,
"State-gov",
"High School grad",
"Never-Married",
"White-Collar",
"Not-in-family",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
64,
"State-gov",
"High School grad",
"Never-Married",
"Blue-Collar",
"Not-in-family",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
33,
"State-gov",
"High School grad",
"Never-Married",
"Service",
"Unmarried",
"Black",
"Female",
1831,
"Capital Loss <= 0.00",
40,
"United-States"
],
[
35,
"State-gov",
"High School grad",
"Never-Married",
"Blue-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
60,
"United-States"
],
[
25,
"State-gov",
"Dropout",
"Never-Married",
"Blue-Collar",
"Own-child",
"Black",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
40,
"State-gov",
"Associates",
"Never-Married",
"Blue-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
19,
"State-gov",
"High School grad",
"Never-Married",
"Blue-Collar",
"Other-relative",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
20,
"United-States"
],
[
44,
"State-gov",
"Dropout",
"Never-Married",
"Blue-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
88,
"United-States"
],
[
80,
"State-gov",
"Associates",
"Never-Married",
"Blue-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
24,
"United-States"
],
[
21,
"State-gov",
"High School grad",
"Never-Married",
"Professional",
"Own-child",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
20,
"United-States"
]
],
"covered_true": [
[
22,
"State-gov",
"High School grad",
"Never-Married",
"Service",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
25,
"United-States"
],
[
49,
"State-gov",
"High School grad",
"Never-Married",
"Service",
"Not-in-family",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
22,
"State-gov",
"Bachelors",
"Never-Married",
"?",
"Not-in-family",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
25,
"United-States"
],
[
31,
"State-gov",
"Bachelors",
"Never-Married",
"Professional",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
50,
"United-States"
],
[
18,
"State-gov",
"Dropout",
"Never-Married",
"Blue-Collar",
"Not-in-family",
"Black",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
56,
"State-gov",
"High School grad",
"Never-Married",
"Blue-Collar",
"Unmarried",
"Black",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
26,
"State-gov",
"Dropout",
"Never-Married",
"Service",
"Unmarried",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
38,
"State-gov",
"Bachelors",
"Never-Married",
"White-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
52,
"State-gov",
"High School grad",
"Never-Married",
"Blue-Collar",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
70,
"United-States"
],
[
25,
"State-gov",
"Associates",
"Never-Married",
"Professional",
"Wife",
"White",
"Female",
"Capital Gain <= 0.00",
1887,
40,
"United-States"
]
],
"covered_false": [
[
46,
"State-gov",
"Prof-School",
"Never-Married",
"Professional",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
45,
"United-States"
],
[
42,
"State-gov",
"Bachelors",
"Never-Married",
"White-Collar",
"Husband",
"White",
"Male",
15024,
"Capital Loss <= 0.00",
50,
"United-States"
],
[
46,
"State-gov",
"Prof-School",
"Never-Married",
"Professional",
"Husband",
"White",
"Male",
15024,
"Capital Loss <= 0.00",
40,
"United-States"
],
[
54,
"State-gov",
"Doctorate",
"Never-Married",
"White-Collar",
"Not-in-family",
"White",
"Female",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
40,
"United-States"
],
[
42,
"State-gov",
"Masters",
"Never-Married",
"White-Collar",
"Not-in-family",
"White",
"Female",
14084,
"Capital Loss <= 0.00",
60,
"United-States"
],
[
37,
"State-gov",
"Masters",
"Never-Married",
"Professional",
"Husband",
"White",
"Male",
"Capital Gain <= 0.00",
"Capital Loss <= 0.00",
45,
"United-States"
]
],
"uncovered_true": [],
"uncovered_false": []
}
],
"all_precision": 0,
"num_preds": 1000101,
"names": [
"Marital Status = Never-Married",
"Workclass = State-gov"
],
"instance": [
[
39
],
[
7
],
[
"28.00 < Age <= 37.00"
],
[
"28.00 < Age <= 37.00"
],
[
"28.00 < Age <= 37.00"
],
[
"28.00 < Age <= 37.00"
],
[
4
],
[
"28.00 < Age <= 37.00"
],
[
2174
],
[
"Age <= 28.00"
],
[
40
],
[
9
]
],
"prediction": 0
}
}
Example Anchors Text Explanation for Movie Sentiment
This example uses a movie sentiment dataset to show an explanation on text data. For a more visual walkthrough, try the Jupyter notebook.
Create the InferenceService with an AnchorText Explainer
We can create an InferenceService for this dataset with a trained sklearn predictor and an associated explainer. The black-box explainer algorithm we will use is the Text version of Anchors from the Alibi open-source library. More details on this algorithm and the configuration settings that can be set are available in the Seldon Alibi documentation.
The InferenceService looks like:
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "moviesentiment"
spec:
  predictor:
    minReplicas: 1
    sklearn:
      storageUri: "gs://seldon-models/sklearn/moviesentiment"
      resources:
        requests:
          cpu: 0.1
          memory: 1Gi
        limits:
          cpu: 1
          memory: 1Gi
  explainer:
    minReplicas: 1
    alibi:
      type: AnchorText
      resources:
        requests:
          cpu: 0.1
          memory: 6Gi
        limits:
          memory: 6Gi
Create this InferenceService:
kubectl create -f moviesentiment.yaml
Run the inference and explanation
Set up some environment variables for the model name and the cluster entrypoint:
MODEL_NAME=moviesentiment
SERVICE_HOSTNAME=$(kubectl get inferenceservice moviesentiment -o jsonpath='{.status.url}' | cut -d "/" -f 3)
Test the predictor on an example sentence:
curl -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:predict -d '{"instances":["a visually flashy but narratively opaque and emotionally vapid exercise ."]}'
You should receive a response showing negative sentiment:
Expected Output
{"predictions": [0]}
Test another sentence:
curl -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:predict -d '{"instances":["a touching , sophisticated film that almost seems like a documentary in the way it captures an italian immigrant family on the brink of major changes ."]}'
You should receive a response showing positive sentiment:
Expected Output
{"predictions": [1]}
Now let's get an explanation for the first sentence:
curl -v -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:explain -d '{"instances":["a visually flashy but narratively opaque and emotionally vapid exercise ."]}'
Expected Output
{
"names": [
"exercise"
],
"precision": 1,
"coverage": 0.5005,
"raw": {
"feature": [
9
],
"mean": [
1
],
"precision": [
1
],
"coverage": [
0.5005
],
"examples": [
{
"covered": [
[
"a visually UNK UNK UNK opaque and emotionally vapid exercise UNK"
],
[
"a visually flashy but UNK UNK and emotionally UNK exercise ."
],
[
"a visually flashy but narratively UNK UNK UNK UNK exercise ."
],
[
"UNK UNK flashy UNK narratively opaque UNK UNK vapid exercise ."
],
[
"UNK visually UNK UNK UNK UNK and UNK vapid exercise UNK"
],
[
"UNK UNK UNK but UNK opaque UNK emotionally UNK exercise UNK"
],
[
"a UNK flashy UNK UNK UNK and emotionally vapid exercise ."
],
[
"UNK UNK flashy UNK narratively opaque UNK emotionally UNK exercise ."
],
[
"UNK UNK flashy UNK narratively opaque UNK UNK vapid exercise UNK"
],
[
"a visually UNK but narratively opaque UNK UNK vapid exercise UNK"
]
],
"covered_true": [
[
"UNK visually flashy but UNK UNK and emotionally vapid exercise ."
],
[
"UNK visually UNK UNK UNK UNK and UNK UNK exercise ."
],
[
"a UNK UNK UNK narratively opaque UNK UNK UNK exercise UNK"
],
[
"a visually UNK UNK narratively opaque UNK UNK UNK exercise UNK"
],
[
"a UNK UNK UNK UNK UNK and emotionally vapid exercise UNK"
],
[
"a UNK flashy UNK narratively UNK and UNK vapid exercise UNK"
],
[
"UNK visually UNK UNK narratively UNK and emotionally UNK exercise ."
],
[
"UNK visually flashy UNK narratively opaque UNK emotionally UNK exercise UNK"
],
[
"UNK UNK flashy UNK UNK UNK and UNK vapid exercise UNK"
],
[
"a UNK flashy UNK UNK UNK and emotionally vapid exercise ."
]
],
"covered_false": [],
"uncovered_true": [],
"uncovered_false": []
}
],
"all_precision": 0,
"num_preds": 1000101,
"names": [
"exercise"
],
"positions": [
63
],
"instance": "a visually flashy but narratively opaque and emotionally vapid exercise .",
"prediction": 0
}
}
This shows the key word "exercise" was identified; the examples show it in context, with the default "UNK" placeholder substituted for the surrounding words.
Custom configuration
You can add custom configuration for the Anchor Text explainer in the config section. For example, we can change the text explainer to sample from the corpus instead of using UNK placeholders:
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "moviesentiment"
spec:
  predictor:
    sklearn:
      storageUri: "gs://seldon-models/sklearn/moviesentiment"
      resources:
        requests:
          cpu: 0.1
  explainer:
    alibi:
      type: AnchorText
      config:
        use_unk: "false"
        sample_proba: "0.5"
      resources:
        requests:
          cpu: 0.1
If we apply this yaml:
kubectl create -f moviesentiment2.yaml
and then request an explanation:
curl -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:explain -d '{"instances":["a visually flashy but narratively opaque and emotionally vapid exercise ."]}'
Expected Output
{
"names": [
"exercise"
],
"precision": 0.9918032786885246,
"coverage": 0.5072,
"raw": {
"feature": [
9
],
"mean": [
0.9918032786885246
],
"precision": [
0.9918032786885246
],
"coverage": [
0.5072
],
"examples": [
{
"covered": [
[
"each visually playful but enormously opaque and academically vapid exercise ."
],
[
"each academically trashy but narratively pigmented and profoundly vapid exercise ."
],
[
"a masterfully flashy but narratively straightforward and verbally disingenuous exercise ."
],
[
"a visually gaudy but interestingly opaque and emotionally vapid exercise ."
],
[
"some concurrently flashy but philosophically voxel and emotionally vapid exercise ."
],
[
"a visually flashy but delightfully sensible and emotionally snobby exercise ."
],
[
"a surprisingly bland but fantastically seamless and hideously vapid exercise ."
],
[
"both visually classy but nonetheless robust and musically vapid exercise ."
],
[
"a visually fancy but narratively robust and emotionally uninformed exercise ."
],
[
"a visually flashy but tastefully opaque and weirdly vapid exercise ."
]
],
"covered_true": [
[
"another visually flashy but narratively opaque and emotionally vapid exercise ."
],
[
"the visually classy but narratively opaque and emotionally vapid exercise ."
],
[
"the visually arty but overshadow yellowish and emotionally vapid exercise ."
],
[
"a objectively flashy but genuinely straightforward and emotionally vapid exercise ."
],
[
"a visually flashy but tastefully opaque and weirdly vapid exercise ."
],
[
"a emotionally crafty but narratively opaque and emotionally vapid exercise ."
],
[
"some similarly eclectic but narratively dainty and emotionally illogical exercise ."
],
[
"a nicely flashy but psychologically opaque and emotionally vapid exercise ."
],
[
"a visually flashy but narratively colorless and emotionally vapid exercise ."
],
[
"every properly lavish but logistically opaque and someway incomprehensible exercise ."
]
],
"covered_false": [
[
"another enormously inventive but socially opaque and somewhat idiotic exercise ."
],
[
"each visually playful but enormously opaque and academically vapid exercise ."
]
],
"uncovered_true": [],
"uncovered_false": []
}
],
"all_precision": 0,
"num_preds": 1000101,
"names": [
"exercise"
],
"positions": [
63
],
"instance": "a visually flashy but narratively opaque and emotionally vapid exercise .",
"prediction": 0
}
}
Run on a notebook
You can also run this example on a notebook.
Explaining an MNIST classification with AIX
This is an example of how to explain model predictions with AI Explainability 360 (AIX360) on KServe. We will use a model trained on the MNIST dataset of handwritten digits and explain how the model decides on its predictions.
Create the InferenceService with the AIX Explainer
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "aix-explainer"
  namespace: default
spec:
  predictor:
    containers:
      - name: predictor
        image: aipipeline/rf-predictor:0.4.1
        command: ["python", "-m", "rfserver", "--model_name", "aix-explainer"]
        imagePullPolicy: Always
  explainer:
    aix:
      type: LimeImages
      config:
        num_samples: "100"
        top_labels: "10"
        min_weight: "0.01"
Deploy the InferenceService with the v1beta1 API:
kubectl apply -f aix-explainer.yaml
Then find the URL:
kubectl get inferenceservice aix-explainer
NAME URL READY PREV LATEST PREVROLLEDOUTREVISION LATESTREADYREVISION AGE
aix-explainer http://aix-explainer.default.example.com True 100 aix-explainer-predictor-default-00001 43m
Run the explanation
The first step is to determine the ingress IP and port and set INGRESS_HOST and INGRESS_PORT. The sample code used to train the model and to query the explainer can be found here.
MODEL_NAME=aix-explainer
SERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -o jsonpath='{.status.url}' | cut -d "/" -f 3)
python query_explain.py http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${MODEL_NAME}:explain ${SERVICE_HOSTNAME}
After a while you should see a pop-up window containing the explanation, similar to the image below. The LIME method used in this example highlights in red the pixels whose score for indicating the classification is above a given confidence value. The explanation shown contains a set of highlighted images, each paired with a caption describing its context. For each caption and image pair, the caption reads "Positive for <X> Actual <Y>", meaning that <X> is the classification LIME is testing and <Y> is the correct label for that image.

For example, the top-left image captioned "Positive for 2 Actual 2" highlights the pixels whose score was above the specified confidence level for a classification of 2 (where 2 is also the correct classification).
Similarly, the bottom-right image captioned "Positive for 0 Actual 2" highlights the pixels whose score was above the specified confidence level for a classification of 0 (where 2 is the correct classification). If the model had misclassified the image as 0, you could explain why by treating the highlighted pixels as particularly troublesome. By raising and lowering the min_weight parameter in the deployment yaml, you can test which pixels the model considers most and least relevant for each classification.
To try other MNIST examples, add an integer between 0 and 10000 to the end of the query. The chosen integer is the index of the image to select from the MNIST dataset.
python query_explain.py http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${MODEL_NAME}:explain ${SERVICE_HOSTNAME} 100
To try different parameters with the explainer, add another stringified-JSON argument specifying the parameters. Parameters that support modification: top_labels, segmentation_alg, num_samples, positive_only, and min_weight.
python query_explain.py http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${MODEL_NAME}:explain ${SERVICE_HOSTNAME} 100 '{"top_labels":"10"}'
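For instance, a call that lowers the sample count and raises the weight threshold might look like the following; the parameter values here are purely illustrative.

python query_explain.py http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${MODEL_NAME}:explain ${SERVICE_HOSTNAME} 100 '{"num_samples":"50","min_weight":"0.02"}'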
Stop the InferenceService
kubectl delete -f aix-explainer.yaml
Build a development AIX model explainer Docker image
If you would like to build a development image for the AIX model explainer, please follow these instructions.
Troubleshooting
<504> Gateway Timeout - The explainer may be taking too long and not sending back a response fast enough. Either there are not enough resources allocated, or the number of samples the explainer is allowed to take needs to be reduced. To fix this, go to aix-explainer.yaml and increase the resources. Alternatively, to reduce the allowed number of samples, go to aix-explainer.yaml and add a '--num_samples' flag under explainer: command: (the default number of samples is 1000).
If you see 'Configuration "aixserver-explainer-default" does not have any ready Revision', the container may be taking too long to download. If you run kubectl get revision and find that your revision is stuck in ContainerCreating, try deleting the InferenceService and redeploying.
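A minimal sketch of the checks described above; the resource names are examples and will differ in your cluster.

# List Knative revisions and look for ones that never become ready.
kubectl get revisions
# Inspect the explainer pod if a revision is stuck in ContainerCreating.
kubectl get pods | grep aix-explainer
# If it does not recover, delete the InferenceService and redeploy it.
kubectl delete -f aix-explainer.yaml
kubectl apply -f aix-explainer.yaml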