main concepts
dataframe: holds e.g. text, feature vectors, true labels, and predictions
transformer: an algorithm that transforms one dataframe into a new dataframe, e.g. an ML model transforms a dataframe of features into a dataframe of predictions
estimator: an algorithm that can be fit on a dataframe to produce a transformer, e.g. a learning algorithm is an estimator that trains on a dataframe and produces a model
pipeline: chains multiple transformers and estimators together into a single ML workflow
parameter: all transformers and estimators share a common API for specifying parameters
pipeline components
transformers
1 transformer = feature transformers + learned models
2 implements method transform(), which converts one dataframe into another
3 feature transformer: reads a column, modifies or appends columns, and outputs a new dataframe
learning model: reads the feature vector column, predicts the label for each vector, and outputs a new dataframe with a predicted label column appended
estimators
1 estimator: abstracts an algorithm that fits on or trains over data
2 implements method fit(), which accepts a dataframe and produces a model; the model is itself a transformer
3 LogisticRegression is an estimator; fit() trains a LogisticRegressionModel, which is a transformer
A workflow consists of a sequence of PipelineStages (transformers and estimators)
dag pipelines: if the data flow graph forms a DAG, stages must be specified in topological order, not just linear order
runtime checking: pipelines operate on dataframes with varied column types, so compile-time type checking is impossible; before actually running a pipeline, Pipeline and PipelineModel instead perform runtime checking using the dataframe schema
unique pipeline stages: every stage has a unique id, so the same stage instance should not be inserted into a pipeline twice
parameters
1 Param: a named parameter with self-contained documentation; ParamMap: a set of (parameter, value) pairs
2 parameters belong to specific instances of estimators and transformers
persistence:
1 save a model or a pipeline to disk for later use
2 backward compatibility: Spark tries to keep models/pipelines persisted by an earlier version loadable in later versions