Hadoop Example: Sort

Reference: http://www.hadooper.cn/dct/page/65777

1 Sort Example

The sort example simply uses the map/reduce framework to sort an input directory into an output directory. The input and output must be sequence files whose keys and values are BytesWritable.
The mapper is the predefined IdentityMapper and the reducer is the predefined IdentityReducer; both pass their input straight through to the output.
To run the example: bin/hadoop jar hadoop-*-examples.jar sort [-m <#maps>] [-r <#reduces>] <in-dir> <out-dir>
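
Because both classes just pass records through, all of the actual ordering happens in the framework's shuffle/sort phase between map and reduce. Below is a minimal sketch of such a pass-through mapper in the old mapred API; the class name is illustrative, while the real classes are org.apache.hadoop.mapred.lib.IdentityMapper and IdentityReducer.

import java.io.IOException;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class PassThroughMapper<K, V> extends MapReduceBase
    implements Mapper<K, V, K, V> {
  public void map(K key, V value,
                  OutputCollector<K, V> output, Reporter reporter)
      throws IOException {
    // Emit every record unchanged; the framework sorts the keys in between.
    output.collect(key, value);
  }
}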

2 Running the Sort Benchmark

To use the sort example as a benchmark, generate 10GB/node of random data using RandomWriter, then sort the data using the sort example. This provides a sort benchmark that scales with the size of the cluster. By default, the sort example uses 1.0 * capacity as the number of reduces; depending on your cluster, you may see better results at 1.75 * capacity.
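
For instance, on a hypothetical cluster of 10 task trackers with 2 reduce slots each (capacity = 20), 1.75 * capacity works out to 35 reduces, which can be requested explicitly with the -r flag:

% bin/hadoop jar hadoop-*-examples.jar sort -r 35 rand rand-sort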

The commands are:

% bin/hadoop jar hadoop-*-examples.jar randomwriter rand
% bin/hadoop jar hadoop-*-examples.jar sort rand rand-sort

The first command generates unsorted data in the rand directory. The second command reads that data, sorts it, and writes the result to the rand-sort directory.
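
Once the second command finishes, the result can be inspected with the ordinary HDFS shell, for example:

% bin/hadoop fs -ls rand-sort
% bin/hadoop fs -text rand-sort/part-00000

(fs -text decodes sequence files, so it can also dump the sorted output.)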

Sort supports the generic command-line options; see DevelopmentCommandLineOptions.
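
Besides the generic options, the driver in section 3.1 also accepts a -totalOrder <pcnt> <num samples> <max splits> flag, which switches the job to a TotalOrderPartitioner driven by input sampling. With illustrative sampling parameters (a 10% sample rate, at most 10000 samples, from at most 10 splits), an invocation would look like:

% bin/hadoop jar hadoop-*-examples.jar sort -totalOrder 0.1 10000 10 rand rand-sort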

3 Experiment

3.1 Example Code: Sort.java

/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.hadoop.examples;

import java.io.IOException;
import java.net.URI;
import java.util.*;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;
import org.apache.hadoop.mapred.lib.InputSampler;
import org.apache.hadoop.mapred.lib.TotalOrderPartitioner;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
 * This is the trivial map/reduce program that does absolutely nothing
 * other than use the framework to fragment and sort the input values.
 *
 * To run: bin/hadoop jar build/hadoop-examples.jar sort
 *            [-m <i>maps</i>] [-r <i>reduces</i>]
 *            [-inFormat <i>input format class</i>] 
 *            [-outFormat <i>output format class</i>] 
 *            [-outKey <i>output key class</i>] 
 *            [-outValue <i>output value class</i>] 
 *            [-totalOrder <i>pcnt</i> <i>num samples</i> <i>max splits</i>]
 *            <i>in-dir</i> <i>out-dir</i> 
 */
public class Sort<K,V> extends Configured implements Tool {
  private RunningJob jobResult = null;

  static int printUsage() {
    System.out.println("sort [-m <maps>] [-r <reduces>] " +
                       "[-inFormat <input format class>] " +
                       "[-outFormat <output format class>] " + 
                       "[-outKey <output key class>] " +
                       "[-outValue <output value class>] " +
                       "[-totalOrder <pcnt> <num samples> <max splits>] " +
                       "<input> <output>");
    ToolRunner.printGenericCommandUsage(System.out);
    return -1;
  }

  /**
   * The main driver for sort program.
   * Invoke this method to submit the map/reduce job.
   * @throws IOException When there are communication problems with the 
   *                     job tracker.
   */
  public int run(String[] args) throws Exception {

    JobConf jobConf = new JobConf(getConf(), Sort.class);
    jobConf.setJobName("sorter");

    jobConf.setMapperClass(IdentityMapper.class);        
    jobConf.setReducerClass(IdentityReducer.class);

    // Size the reduce count from the cluster status: by default 90% of the
    // cluster's total reduce-task capacity, or, when test.sort.reduces_per_host
    // is set, (number of task trackers) * (reduces per host).
    JobClient client = new JobClient(jobConf);
    ClusterStatus cluster = client.getClusterStatus();
    int num_reduces = (int) (cluster.getMaxReduceTasks() * 0.9);
    String sort_reduces = jobConf.get("test.sort.reduces_per_host");
    if (sort_reduces != null) {
       num_reduces = cluster.getTaskTrackers() * 
                       Integer.parseInt(sort_reduces);
    }
    Class<? extends InputFormat> inputFormatClass = 
      SequenceFileInputFormat.class;
    Class<? extends OutputFormat> outputFormatClass = 
      SequenceFileOutputFormat.class;
    Class<? extends WritableComparable> outputKeyClass = BytesWritable.class;
    Class<? extends Writable> outputValueClass = BytesWritable.class;
    List<String> otherArgs = new ArrayList<String>();
    InputSampler.Sampler<K,V> sampler = null;
    // Walk the command line: recognized flags configure the job, and anything
    // unrecognized is collected into otherArgs as an input/output path.
    for(int i=0; i < args.length; ++i) {
      try {
        if ("-m".equals(args[i])) {
          jobConf.setNumMapTasks(Integer.parseInt(args[++i]));
        } else if ("-r".equals(args[i])) {
          num_reduces = Integer.parseInt(args[++i]);
        } else if ("-inFormat".equals(args[i])) {
          inputFormatClass = 
            Class.forName(args[++i]).asSubclass(InputFormat.class);
        } else if ("-outFormat".equals(args[i])) {
          outputFormatClass = 
            Class.forName(args[++i]).asSubclass(OutputFormat.class);
        } else if ("-outKey".equals(args[i])) {
          outputKeyClass = 
            Class.forName(args[++i]).asSubclass(WritableComparable.class);
        } else if ("-outValue".equals(args[i])) {
          outputValueClass = 
            Class.forName(args[++i]).asSubclass(Writable.class);
        } else if ("-totalOrder".equals(args[i])) {
          double pcnt = Double.parseDouble(args[++i]);
          int numSamples = Integer.parseInt(args[++i]);
          int maxSplits = Integer.parseInt(args[++i]);
          if (0 >= maxSplits) maxSplits = Integer.MAX_VALUE;
          sampler =
            new InputSampler.RandomSampler<K,V>(pcnt, numSamples, maxSplits);
        } else {
          otherArgs.add(args[i]);
        }
      } catch (NumberFormatException except) {
        System.out.println("ERROR: Integer expected instead of " + args[i]);
        return printUsage();
      } catch (ArrayIndexOutOfBoundsException except) {
        System.out.println("ERROR: Required parameter missing from " +
            args[i-1]);
        return printUsage(); // exits
      }
    }

    // Set user-supplied (possibly default) job configs
    jobConf.setNumReduceTasks(num_reduces);

    jobConf.setInputFormat(inputFormatClass);
    jobConf.setOutputFormat(outputFormatClass);

    jobConf.setOutputKeyClass(outputKeyClass);
    jobConf.setOutputValueClass(outputValueClass);

    // Make sure there are exactly 2 parameters left.
    if (otherArgs.size() != 2) {
      System.out.println("ERROR: Wrong number of parameters: " +
          otherArgs.size() + " instead of 2.");
      return printUsage();
    }
    FileInputFormat.setInputPaths(jobConf, otherArgs.get(0));
    FileOutputFormat.setOutputPath(jobConf, new Path(otherArgs.get(1)));

    if (sampler != null) {
      System.out.println("Sampling input to effect total-order sort...");
      jobConf.setPartitionerClass(TotalOrderPartitioner.class);
      // Sample the input to compute split points, write them to a partition
      // file next to the input, and distribute that file to every task via
      // the DistributedCache (symlinked locally as _sortPartitioning).
      Path inputDir = FileInputFormat.getInputPaths(jobConf)[0];
      inputDir = inputDir.makeQualified(inputDir.getFileSystem(jobConf));
      Path partitionFile = new Path(inputDir, "_sortPartitioning");
      TotalOrderPartitioner.setPartitionFile(jobConf, partitionFile);
      InputSampler.<K,V>writePartitionFile(jobConf, sampler);
      URI partitionUri = new URI(partitionFile.toString() +
                                 "#" + "_sortPartitioning");
      DistributedCache.addCacheFile(partitionUri, jobConf);
      DistributedCache.createSymlink(jobConf);
    }

    System.out.println("Running on " +
        cluster.getTaskTrackers() +
        " nodes to sort from " + 
        FileInputFormat.getInputPaths(jobConf)[0] + " into " +
        FileOutputFormat.getOutputPath(jobConf) +
        " with " + num_reduces + " reduces.");
    Date startTime = new Date();
    System.out.println("Job started: " + startTime);
    jobResult = JobClient.runJob(jobConf);
    Date end_time = new Date();
    System.out.println("Job ended: " + end_time);
    System.out.println("The job took " + 
        (end_time.getTime() - startTime.getTime()) /1000 + " seconds.");
    return 0;
  }

// Example program arguments: /home/hadoop/rand/part-00000 /home/hadoop/rand-sort

  public static void main(String[] args) throws Exception {
    int res = ToolRunner.run(new Configuration(), new Sort(), args);
    System.exit(res);
  }

  /**
   * Get the last job that was run using this instance.
   * @return the results of the last job that was run
   */
  public RunningJob getResult() {
    return jobResult;
  }
}

3.2 Setting the Parameters in Eclipse:

/home/hadoop/rand/part-00000 /home/hadoop/rand-sort

Here /home/hadoop/rand/part-00000 is the input path and /home/hadoop/rand-sort is the output path.

3.3 Data Source

The input path /home/hadoop/rand/part-00000 in the arguments above was produced by the Hadoop RandomWriter example. That run actually produced two files; to save time, we use only one of them, part-00000. To sort both files, the input path just needs to be the directory itself.
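
For example, to sort both part files in one run, the program arguments in Eclipse would simply be:

/home/hadoop/rand /home/hadoop/rand-sort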

4 Summary

So far in my tests this program only runs standalone, not on the cluster: it works with Run As -> Java Application, but not with Run On Hadoop. I have not yet found the specific cause; I will update this post if I do.

PS: 2011-10-18

Run configurations

1. Run As -> Java Application

Console output:

Running on 1 nodes to sort from hdfs://master:9000/home/hadoop/rand/part-00000 into hdfs://master:9000/home/hadoop/rand-sort with 1 reduces.

2. One master and one slave, Run On Hadoop

Console output:

Running on 1 nodes to sort from hdfs://master:9000/home/hadoop/rand/part-00000 into hdfs://master:9000/home/hadoop/rand-sort with 1 reduces.
Same as in the first case.

3. One host acting as both master and slave, plus another host as a dedicated slave

Console output:

11/10/18 09:24:35 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
Running on 2 nodes to sort from hdfs://master:9000/home/hadoop/rand/part-00000 into hdfs://master:9000/home/hadoop/rand-sort with 3 reduces.
Job started: Tue Oct 18 09:24:35 CST 2011
11/10/18 09:24:35 INFO mapred.FileInputFormat: Total input paths to process : 1
11/10/18 09:24:36 INFO mapred.JobClient: Running job: job_201110180923_0001
11/10/18 09:24:37 INFO mapred.JobClient:  map 0% reduce 0%
11/10/18 09:24:50 INFO mapred.JobClient:  map 6% reduce 0%
11/10/18 09:24:51 INFO mapred.JobClient:  map 18% reduce 0%
11/10/18 09:24:53 INFO mapred.JobClient:  map 25% reduce 0%
11/10/18 09:24:56 INFO mapred.JobClient:  map 31% reduce 0%
11/10/18 09:25:01 INFO mapred.JobClient:  map 43% reduce 0%
11/10/18 09:25:02 INFO mapred.JobClient:  map 49% reduce 0%
11/10/18 09:25:04 INFO mapred.JobClient:  map 50% reduce 2%
11/10/18 09:25:08 INFO mapred.JobClient:  map 56% reduce 4%
11/10/18 09:25:09 INFO mapred.JobClient:  map 62% reduce 6%
11/10/18 09:25:11 INFO mapred.JobClient:  map 68% reduce 8%
11/10/18 09:25:12 INFO mapred.JobClient:  map 75% reduce 8%
11/10/18 09:25:14 INFO mapred.JobClient:  map 81% reduce 9%
11/10/18 09:25:20 INFO mapred.JobClient:  map 87% reduce 9%
11/10/18 09:25:23 INFO mapred.JobClient:  map 93% reduce 12%
11/10/18 09:25:26 INFO mapred.JobClient:  map 93% reduce 13%
11/10/18 09:25:27 INFO mapred.JobClient:  map 100% reduce 14%
11/10/18 09:25:29 INFO mapred.JobClient:  map 100% reduce 15%
11/10/18 09:25:35 INFO mapred.JobClient:  map 100% reduce 16%
11/10/18 09:25:36 INFO mapred.JobClient:  map 100% reduce 17%
11/10/18 09:27:49 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000000_0, Status : FAILED
Too many fetch-failures
11/10/18 09:27:49 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:27:49 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:28:05 INFO mapred.JobClient:  map 100% reduce 18%
11/10/18 09:32:51 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000003_0, Status : FAILED
Too many fetch-failures
11/10/18 09:32:55 INFO mapred.JobClient:  map 93% reduce 18%
11/10/18 09:32:58 INFO mapred.JobClient:  map 100% reduce 18%
11/10/18 09:33:04 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000001_0, Status : FAILED
Too many fetch-failures
11/10/18 09:33:04 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:33:04 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:33:11 INFO mapred.JobClient:  map 100% reduce 19%
11/10/18 09:33:20 INFO mapred.JobClient:  map 100% reduce 20%
11/10/18 09:38:19 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000005_0, Status : FAILED
Too many fetch-failures
11/10/18 09:38:19 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:38:19 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:38:23 INFO mapred.JobClient:  map 93% reduce 20%
11/10/18 09:38:26 INFO mapred.JobClient:  map 100% reduce 20%
11/10/18 09:38:35 INFO mapred.JobClient:  map 100% reduce 21%
11/10/18 09:38:41 INFO mapred.JobClient:  map 100% reduce 22%
11/10/18 09:43:10 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000002_0, Status : FAILED
Too many fetch-failures
11/10/18 09:43:14 INFO mapred.JobClient:  map 93% reduce 22%
11/10/18 09:43:17 INFO mapred.JobClient:  map 100% reduce 22%
11/10/18 09:43:35 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000006_0, Status : FAILED
Too many fetch-failures
11/10/18 09:43:35 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:43:35 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:43:51 INFO mapred.JobClient:  map 100% reduce 24%
11/10/18 09:48:50 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000009_0, Status : FAILED
Too many fetch-failures
11/10/18 09:48:50 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:48:50 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:49:06 INFO mapred.JobClient:  map 100% reduce 25%
11/10/18 09:53:21 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000004_0, Status : FAILED
Too many fetch-failures
11/10/18 09:53:25 INFO mapred.JobClient:  map 93% reduce 25%
11/10/18 09:53:28 INFO mapred.JobClient:  map 100% reduce 25%
11/10/18 09:53:37 INFO mapred.JobClient:  map 100% reduce 26%
11/10/18 09:54:05 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000011_0, Status : FAILED
Too many fetch-failures
11/10/18 09:54:05 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:54:05 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:54:21 INFO mapred.JobClient:  map 100% reduce 27%
11/10/18 09:59:20 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000015_0, Status : FAILED
Too many fetch-failures
11/10/18 09:59:20 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:59:20 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 09:59:36 INFO mapred.JobClient:  map 100% reduce 52%
11/10/18 09:59:42 INFO mapred.JobClient:  map 100% reduce 53%
11/10/18 09:59:45 INFO mapred.JobClient:  map 100% reduce 54%
11/10/18 09:59:48 INFO mapred.JobClient:  map 100% reduce 55%
11/10/18 09:59:51 INFO mapred.JobClient:  map 100% reduce 56%
11/10/18 09:59:54 INFO mapred.JobClient:  map 100% reduce 57%
11/10/18 09:59:57 INFO mapred.JobClient:  map 100% reduce 58%
11/10/18 10:00:00 INFO mapred.JobClient:  map 100% reduce 60%
11/10/18 10:00:03 INFO mapred.JobClient:  map 100% reduce 61%
11/10/18 10:00:06 INFO mapred.JobClient:  map 100% reduce 62%
11/10/18 10:00:09 INFO mapred.JobClient:  map 100% reduce 63%
11/10/18 10:00:12 INFO mapred.JobClient:  map 100% reduce 64%
11/10/18 10:00:15 INFO mapred.JobClient:  map 100% reduce 65%
11/10/18 10:00:18 INFO mapred.JobClient:  map 100% reduce 66%
11/10/18 10:00:21 INFO mapred.JobClient:  map 100% reduce 67%
11/10/18 10:00:24 INFO mapred.JobClient:  map 100% reduce 68%
11/10/18 10:00:27 INFO mapred.JobClient:  map 100% reduce 69%
11/10/18 10:00:30 INFO mapred.JobClient:  map 100% reduce 70%
11/10/18 10:00:33 INFO mapred.JobClient:  map 100% reduce 71%
11/10/18 10:00:36 INFO mapred.JobClient:  map 100% reduce 72%
11/10/18 10:00:39 INFO mapred.JobClient:  map 100% reduce 73%
11/10/18 10:00:52 INFO mapred.JobClient:  map 100% reduce 75%
11/10/18 10:03:41 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000007_0, Status : FAILED
Too many fetch-failures
11/10/18 10:03:45 INFO mapred.JobClient:  map 93% reduce 75%
11/10/18 10:03:48 INFO mapred.JobClient:  map 100% reduce 75%
11/10/18 10:08:34 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000003_1, Status : FAILED
Too many fetch-failures
11/10/18 10:08:34 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 10:08:34 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 10:08:50 INFO mapred.JobClient:  map 100% reduce 76%
11/10/18 10:13:53 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000008_0, Status : FAILED
Too many fetch-failures
11/10/18 10:13:57 INFO mapred.JobClient:  map 93% reduce 76%
11/10/18 10:14:00 INFO mapred.JobClient:  map 100% reduce 76%
11/10/18 10:18:49 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000002_1, Status : FAILED
Too many fetch-failures
11/10/18 10:18:49 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 10:18:49 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 10:19:05 INFO mapred.JobClient:  map 100% reduce 77%
11/10/18 10:24:09 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000010_0, Status : FAILED
Too many fetch-failures
11/10/18 10:24:13 INFO mapred.JobClient:  map 93% reduce 77%
11/10/18 10:24:16 INFO mapred.JobClient:  map 100% reduce 77%
11/10/18 10:29:04 INFO mapred.JobClient: Task Id : attempt_201110180923_0001_m_000004_1, Status : FAILED
Too many fetch-failures
11/10/18 10:29:04 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 10:29:04 WARN mapred.JobClient: Error reading task outputxuwei-laptop
11/10/18 10:29:20 INFO mapred.JobClient:  map 100% reduce 89%
11/10/18 10:29:23 INFO mapred.JobClient:  map 100% reduce 91%
11/10/18 10:29:26 INFO mapred.JobClient:  map 100% reduce 92%
11/10/18 10:29:29 INFO mapred.JobClient:  map 100% reduce 93%
11/10/18 10:29:32 INFO mapred.JobClient:  map 100% reduce 94%
11/10/18 10:29:35 INFO mapred.JobClient:  map 100% reduce 95%
11/10/18 10:29:38 INFO mapred.JobClient:  map 100% reduce 96%
11/10/18 10:29:41 INFO mapred.JobClient:  map 100% reduce 97%
11/10/18 10:29:44 INFO mapred.JobClient:  map 100% reduce 98%
11/10/18 10:29:50 INFO mapred.JobClient:  map 100% reduce 100%
11/10/18 10:29:52 INFO mapred.JobClient: Job complete: job_201110180923_0001
11/10/18 10:29:52 INFO mapred.JobClient: Counters: 18
11/10/18 10:29:52 INFO mapred.JobClient:   Job Counters 
11/10/18 10:29:52 INFO mapred.JobClient:     Launched reduce tasks=4
11/10/18 10:29:52 INFO mapred.JobClient:     Launched map tasks=32
11/10/18 10:29:52 INFO mapred.JobClient:     Data-local map tasks=32
11/10/18 10:29:52 INFO mapred.JobClient:   FileSystemCounters
11/10/18 10:29:52 INFO mapred.JobClient:     FILE_BYTES_READ=1075141899
11/10/18 10:29:52 INFO mapred.JobClient:     HDFS_BYTES_READ=1077495458
11/10/18 10:29:52 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=2150285276
11/10/18 10:29:52 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=1077290017
11/10/18 10:29:52 INFO mapred.JobClient:   Map-Reduce Framework
11/10/18 10:29:52 INFO mapred.JobClient:     Reduce input groups=102334
11/10/18 10:29:52 INFO mapred.JobClient:     Combine output records=0
11/10/18 10:29:52 INFO mapred.JobClient:     Map input records=102334
11/10/18 10:29:52 INFO mapred.JobClient:     Reduce shuffle bytes=1031027235
11/10/18 10:29:52 INFO mapred.JobClient:     Reduce output records=102334
11/10/18 10:29:52 INFO mapred.JobClient:     Spilled Records=204668
11/10/18 10:29:52 INFO mapred.JobClient:     Map output bytes=1074566657
11/10/18 10:29:52 INFO mapred.JobClient:     Map input bytes=1077289249
11/10/18 10:29:52 INFO mapred.JobClient:     Combine input records=0
11/10/18 10:29:52 INFO mapred.JobClient:     Map output records=102334
11/10/18 10:29:52 INFO mapred.JobClient:     Reduce input records=102334
Job ended: Tue Oct 18 10:29:52 CST 2011
The job took 3916 seconds.




