Mahout SparseVectorsFromSequenceFiles Explained (2)

Document processing

The DocumentProcessor class is what processes the sequence files.

Creating the output Path

Path tokenizedPath = new Path(outputDir, DocumentProcessor.TOKENIZED_DOCUMENT_OUTPUT_FOLDER);

This Path constructor comes from the Hadoop API: the first argument is the parent path and the second is the child; it joins the two and normalizes the result (replacing \ with / and stripping any trailing /).
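As a quick illustration (a minimal sketch: the base directory below is hypothetical, and TOKENIZED_DOCUMENT_OUTPUT_FOLDER should resolve to the folder name "tokenized-documents"):

    Path outputDir = new Path("/user/mahout/output/");   // hypothetical base directory, note the trailing /
    Path tokenizedPath = new Path(outputDir, DocumentProcessor.TOKENIZED_DOCUMENT_OUTPUT_FOLDER);
    // prints /user/mahout/output/tokenized-documents -- the parent's trailing / is normalized away
    System.out.println(tokenizedPath);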

Call DocumentProcessor.tokenizeDocuments to process the input documents:

DocumentProcessor.tokenizeDocuments(inputDir, analyzerClass, tokenizedPath, conf);

A word about analyzerClass here; the other parameters are self-explanatory. Looking at the code:

Class<? extends Analyzer> analyzerClass = DefaultAnalyzer.class;

Looking at DefaultAnalyzer.java in the same package:

private final StandardAnalyzer stdAnalyzer = new StandardAnalyzer(Version.LUCENE_31);

This analyzer can be changed; just pass a different one with the -a option.
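For example, a hedged sketch of swapping in a different analyzer in code (WhitespaceAnalyzer is just an illustrative choice; whatever class you pass here, or name with -a, must be loadable and instantiable on the mapper side, which is presumably why Mahout wraps StandardAnalyzer in the no-arg DefaultAnalyzer rather than using it directly):

    Class<? extends Analyzer> analyzerClass = WhitespaceAnalyzer.class;  // org.apache.lucene.analysis.WhitespaceAnalyzer
    DocumentProcessor.tokenizeDocuments(inputDir, analyzerClass, tokenizedPath, conf);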

DocumentProcessor details

    Configuration conf = new Configuration(baseConf);
    // this conf parameter needs to be set to enable serialization of conf values
    conf.set("io.serializations", "org.apache.hadoop.io.serializer.JavaSerialization,"
                                  + "org.apache.hadoop.io.serializer.WritableSerialization");
    conf.set(ANALYZER_CLASS, analyzerClass.getName());

    Job job = new Job(conf);
    job.setJobName("DocumentProcessor::DocumentTokenizer: input-folder: " + input);
    job.setJarByClass(DocumentProcessor.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(StringTuple.class);
    FileInputFormat.setInputPaths(job, input);
    FileOutputFormat.setOutputPath(job, output);

    job.setMapperClass(SequenceFileTokenizerMapper.class);
    job.setInputFormatClass(SequenceFileInputFormat.class);
    job.setNumReduceTasks(0);
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    HadoopUtil.delete(conf, output);

    job.waitForCompletion(true);
This is an ordinary Hadoop job: the mapper is SequenceFileTokenizerMapper and no reducer is configured (setNumReduceTasks(0)), so the job runs map-only and the mapper output is written directly as the final output. HadoopUtil.delete(conf, output) simply removes any existing output directory before the job starts.
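Because the job is map-only, what lands under tokenizedPath is a SequenceFile of Text keys (document ids) and StringTuple values. A rough sketch for inspecting it, reusing the conf and tokenizedPath from above (the part file name below is the typical one for a map-only job and may differ):

    FileSystem fs = FileSystem.get(conf);
    Path part = new Path(tokenizedPath, "part-m-00000");   // typical output file name of a map-only job
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, part, conf);
    Text key = new Text();
    StringTuple value = new StringTuple();
    while (reader.next(key, value)) {
      System.out.println(key + " => " + value.getEntries());  // StringTuple.getEntries() returns the token list
    }
    reader.close();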

Let's see what SequenceFileTokenizerMapper actually does:

  @Override
  protected void map(Text key, Text value, Context context) throws IOException, InterruptedException {
    TokenStream stream = analyzer.reusableTokenStream(key.toString(), new StringReader(value.toString()));
    CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
    StringTuple document = new StringTuple();
    stream.reset();
    while (stream.incrementToken()) {
      if (termAtt.length() > 0) {
        document.add(new String(termAtt.buffer(), 0, termAtt.length()));
      }
    }
    context.write(key, document);
  }

Very simple: it tokenizes the value into individual tokens and adds them to a StringTuple. The signature of reusableTokenStream used above is

reusableTokenStream(final String fieldName, final Reader reader)

so the key is only used as the field name and takes no part in the tokenization itself.
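To make this concrete, here is a small standalone sketch (same Lucene 3.1-era API as above; the input text is made up) showing the tokens that would end up in the StringTuple for one document:

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.Version;

    public class TokenizeDemo {
      public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_31);
        // the first argument is only the field name; it does not influence tokenization
        TokenStream stream = analyzer.reusableTokenStream("anyField",
            new StringReader("Hello, MapReduce World 42"));
        CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
          // prints: hello, mapreduce, world, 42 -- lower-cased, punctuation dropped
          System.out.println(new String(termAtt.buffer(), 0, termAtt.length()));
        }
      }
    }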
