xgboost pipeline

Notes:

from sklearn2pmml.pipeline import PMMLPipeline
import xgboost

# XGBoost hyperparameters
other_params = {
    'learning_rate': 0.1,
    'seed': 0,
    'gamma': 0.05,
    'n_estimators': 15,
    'min_child_weight': 1.4,
    'colsample_bytree': 0.7,
    'max_depth': 2,
    'subsample': 0.8,
    'reg_alpha': 0.98,
    'reg_lambda': 0.95,
}

# Wrap the classifier in a PMMLPipeline so the fitted model can later be exported to PMML
pipeline_obj = PMMLPipeline([
    ('Xgbc', xgboost.XGBClassifier(**other_params))
])

# train_df, features and target are assumed to be defined elsewhere:
# the training DataFrame, the list of feature column names, and the label column name
pipeline_obj.fit(train_df[features], train_df[target])
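
Since the estimator is wrapped in a PMMLPipeline, the fitted pipeline can then be exported to a PMML document with the sklearn2pmml helper function. A minimal sketch (the output file name is just an example; the conversion calls the JPMML-SkLearn converter, so a Java runtime must be available):

from sklearn2pmml import sklearn2pmml

# Export the fitted pipeline to a PMML file (file name is illustrative)
sklearn2pmml(pipeline_obj, "xgb_pipeline.pmml", with_repr=True)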

 

In Java, a pipeline is the idea of chaining a sequence of operations that are executed one after another to process data. It is commonly used in functional-style code and in Apache Beam, a distributed data processing framework, where it lets developers define complex data processing jobs as a series of stages. Here is a simple word-count example of a Beam pipeline in Java:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

public class WordCount {
    public static void main(String[] args) {
        // Create pipeline options and the pipeline itself
        PipelineOptions options = PipelineOptionsFactory.create();
        Pipeline pipeline = Pipeline.create(options);

        // Read the input text file line by line
        PCollection<String> lines = pipeline.apply(TextIO.read().from("input.txt"));

        // Apply a ParDo transform to tokenize each line into words
        PCollection<String> words = lines.apply(ParDo.of(new TokenizerFn()));

        // Count the occurrences of each word
        PCollection<KV<String, Long>> counts = words.apply(Count.perElement());

        // Format the counts as text and write them to the output file
        counts
            .apply(MapElements.into(TypeDescriptors.strings())
                .via(kv -> kv.getKey() + ": " + kv.getValue()))
            .apply(TextIO.write().to("output.txt"));

        // Run the pipeline and block until it finishes
        pipeline.run().waitUntilFinish();
    }
}

// Custom DoFn that splits each line into whitespace-separated tokens
class TokenizerFn extends DoFn<String, String> {
    @ProcessElement
    public void process(@Element String line, OutputReceiver<String> receiver) {
        for (String token : line.split("\\s+")) {
            receiver.output(token);
        }
    }
}
```

In this example, a `Pipeline` is created that reads text from a file, tokenizes it into words with a custom `TokenizerFn`, counts the occurrences of each word, and writes the results to an output file.