-
What is PMML?
- Predictive Model Markup Language
- first proposed in July 1997
- an XML-based format
- key properties: portability (cross-platform), standardization (a standardized model-description language), heterogeneity (inherited from XML itself), independence (independent of any particular data-mining tool), ease of use (a PMML file can be edited as an ordinary XML document)
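As a rough illustration of the format (a hand-written, assumed skeleton, not generated from any real model), a PMML file is a plain XML document with a Header, a DataDictionary describing the fields, and one model element:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<PMML version="4.4" xmlns="http://www.dmg.org/PMML-4_4">
  <Header description="hand-written illustrative skeleton"/>
  <DataDictionary numberOfFields="2">
    <DataField name="sepal length (cm)" optype="continuous" dataType="double"/>
    <DataField name="Species" optype="categorical" dataType="integer"/>
  </DataDictionary>
  <!-- a model element such as <TreeModel>, <SupportVectorMachineModel>
       or <MiningModel> goes here -->
</PMML>
```

Because the description is tool-independent XML like this, a model trained in one environment can be loaded and scored in another.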
-
The difference between fit / transform / fit_transform
- fit: learns parameters (e.g. mean and variance) from the data
- transform: applies the parameters learned by fit to a dataset and converts it
- fit_transform: fit followed by transform in a single call
- the test set must NOT be normalized with its own statistics; it has to be normalized with the mean and variance of the training set, as the code below shows
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

iris = load_iris()
X = iris.data
Y = iris.target
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.3)

ss = StandardScaler()
## 1. On the same dataset, compare fit_transform with fit + transform
ss_fit = ss.fit(xtrain)              # fit first
result1 = ss_fit.transform(xtrain)   # then transform
result2 = ss.fit_transform(xtrain)   # fit and transform in one call
print(result1 == result2)            # all True: identical
## 2. fit on one dataset, transform another; compare with fit_transform on the second
ss_fit = ss.fit(xtrain)              # fit on the training set
result1 = ss_fit.transform(xtest)    # transform the test set with training statistics
result2 = ss.fit_transform(xtest)    # fit and transform on the test set itself
print(result1 == result2)            # not all True: different
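To check that transform really reuses the training statistics, a minimal sketch (split and random_state are arbitrary) can compare StandardScaler's output with the formula computed by hand:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, Y = load_iris(return_X_y=True)
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.3, random_state=0)

ss = StandardScaler().fit(xtrain)   # statistics come from the training set only
scaled_test = ss.transform(xtest)   # test set scaled with training statistics

# transform(xtest) computes (xtest - train_mean) / train_std,
# NOT the test set's own mean and std
manual = (xtest - xtrain.mean(axis=0)) / xtrain.std(axis=0)
print(np.allclose(scaled_test, manual))   # True
```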
-
Pipeline
- as the name suggests, a pipeline chains transform steps plus a final estimator together in order
- the last step must be an estimator
- purpose: it greatly simplifies the code when comparing different parameters of one model. For example, in an illustrative Stack Overflow answer, the plain flow without a pipeline looks like this:
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier

vect = CountVectorizer()
tfidf = TfidfTransformer()
clf = SGDClassifier()

vX = vect.fit_transform(Xtrain)
tfidfX = tfidf.fit_transform(vX)
clf.fit(tfidfX, ytrain)
predicted = clf.predict(tfidfX)
# Now evaluate all steps on the test set: transform only, no refitting
vX = vect.transform(Xtest)
tfidfX = tfidf.transform(vX)
predicted = clf.predict(tfidfX)
With a Pipeline we need much less code; in short, the common flow is encapsulated:
pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', SGDClassifier()),
])
predicted = pipeline.fit(Xtrain, ytrain).predict(Xtrain)
# Now evaluate all steps on the test set
predicted = pipeline.predict(Xtest)
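To see that the two flows are equivalent, here is a minimal self-contained sketch (using StandardScaler and LogisticRegression purely as stand-ins, since the text-processing snippet above assumes external data):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=0.3, random_state=0)

# Pipeline flow: fit runs fit_transform on the scaler, then fits the classifier
pipe = Pipeline([('scale', StandardScaler()),
                 ('clf', LogisticRegression(max_iter=1000))])
pipe.fit(xtrain, ytrain)

# Manual flow: fit the scaler on the training set only, then reuse it
ss = StandardScaler().fit(xtrain)
clf = LogisticRegression(max_iter=1000).fit(ss.transform(xtrain), ytrain)

# At predict time the pipeline calls transform (not fit_transform),
# so both flows give identical test-set predictions
print((pipe.predict(xtest) == clf.predict(ss.transform(xtest))).all())   # True
```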
- steps: a list of tuples; each tuple's first element is a custom name for the step, and the second is the transformer/estimator object itself, e.g. Pipeline([('anova', anova_filter), ('svc', clf)])
- fit: runs the data through all the transform steps, then fits the final estimator on the transformed data
- fit_predict: transforms the data through all steps, then fits the final estimator and predicts on the same data; on the training set this yields the training-set predictions
- fit_transform: transforms the data through all steps, then fits and transforms with the final step (which must itself be a transformer)
- get_params: gets the parameters of the steps
- predict: transforms the data through all steps, then predicts with the final estimator
- predict_log_proba: log of the class probabilities from the final estimator on the transformed data
- predict_proba: class probabilities from the final estimator on the transformed data
- score: score of the final estimator on the transformed data
- score_samples: per-sample scores
- set_params: sets the parameters of the steps
- an example
# Pipeline of SelectKBest + SVM
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
from sklearn.pipeline import Pipeline
# generate some data to play with
X, y = make_classification(n_informative=5, n_redundant=0, random_state=42)
# ANOVA SVM-C
anova_filter = SelectKBest(f_regression, k=5)
clf = svm.SVC(kernel='linear')
anova_svm = Pipeline(steps=[('anova', anova_filter), ('svc', clf)])
# You can set the parameters using the names issued
# For instance, fit using a k of 10 in the SelectKBest
# and a parameter 'C' of the svm
anova_svm.set_params(anova__k=10, svc__C=.1).fit(X, y)
prediction = anova_svm.predict(X)
print(prediction)
print(anova_svm.score(X,y))
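The same `step__param` naming also lets us search parameters across the whole pipeline, which is the "compare different parameters" use case mentioned above. A minimal sketch with GridSearchCV (the grid values and cv=3 are arbitrary choices):

```python
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_informative=5, n_redundant=0, random_state=42)

pipe = Pipeline([('anova', SelectKBest(f_classif)),
                 ('svc', svm.SVC(kernel='linear'))])

# Parameters of each step are addressed as <step_name>__<param_name>,
# exactly like in set_params above
grid = GridSearchCV(pipe, param_grid={'anova__k': [5, 10],
                                      'svc__C': [0.1, 1]}, cv=3)
grid.fit(X, y)
print(grid.best_params_)   # the best k / C combination found on this toy data
```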
-
How do we generate a PMML file? With the nyoka module + a Pipeline
Generate a PMML file for an XGBoost model
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import pandas as pd
from xgboost import XGBClassifier
from nyoka import xgboost_to_pmml
seed = 123456
iris = datasets.load_iris()
target = 'Species'
features = iris.feature_names
iris_df = pd.DataFrame(iris.data, columns=features)
iris_df[target] = iris.target
X, y = iris_df[features], iris_df[target]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=seed)
pipeline = Pipeline([
    ('scaling', StandardScaler()),
    ('xgb', XGBClassifier(n_estimators=5, seed=seed))
])
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
y_pred_proba = pipeline.predict_proba(X_test)
xgboost_to_pmml(pipeline, features, target, "/Users/hqh/pycharm/pmml/xgb-iris.pmml")
Generate a PMML file for an SVM model
import pandas as pd
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from nyoka import skl_to_pmml
iris = datasets.load_iris()
irisd = pd.DataFrame(iris.data,columns=iris.feature_names)
irisd['Species'] = iris.target
features = irisd.columns.drop('Species')
target = 'Species'
pipeline_obj = Pipeline([
    ('scaler', StandardScaler()),
    ('svm', SVC())
])
pipeline_obj.set_params(svm__C=.1)
pipeline_obj.fit(irisd[features], irisd[target])
skl_to_pmml(pipeline_obj, features, target, "svc_pmml.pmml")
Generate a PMML file for an Isolation Forest model
from sklearn.ensemble import IsolationForest
import numpy as np
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
from sklearn import datasets
from sklearn.pipeline import Pipeline
from nyoka import skl_to_pmml
iris = datasets.load_iris()
irisd = pd.DataFrame(iris.data,columns=iris.feature_names)
irisd['Species'] = iris.target
features = irisd.columns.drop('Species')
target = 'Species'
iforest = IsolationForest(n_estimators=40, max_samples=3000, contamination='auto', random_state=np.random.RandomState(42))
model_type="iforest"
pipeline = Pipeline([
    (model_type, iforest)
])
pipeline.fit(iris.data)
skl_to_pmml(pipeline, features, "", "forest.pmml")
-
Predicting with a PMML file
from pypmml import Model
model = Model.fromFile('/Users/hqh/pycharm/pmml/forest.pmml')
result = model.predict({'sepal length (cm)': 1, 'sepal width (cm)': 1,
                        'petal length (cm)': 1, 'petal width (cm)': 1})
print(result)
'''
{'outlier': True, 'anomalyScore': 0.625736561904991}
'''