Hadoop study notes 2


In study notes 1, the word count over the two txt files ran successfully.


1 Here I test with Eclipse. The Eclipse I installed is eclipse-jee-luna-SR2-linux-gtk-x86_64.tar.gz on Linux, downloaded from the official site.

2 As for the plugin, I use hadoop2x-eclipse-plugin-master.zip, also downloaded from the official site.

3 After the installation, switch to the MapReduce perspective.


4 Then drag out the MapReduce Locations view.

5 Then right-click in the MapReduce Locations list, create a new location, and fill in the configuration. Here mouap-pc is this machine's hostname, the same name configured in the /etc/hosts file.



In Hadoop 1, ports 9001 and 9000 were configured in conf/hadoop-site.xml.

In Hadoop 2 they are configured in core-site.xml under /home/mouap/hadoop/etc/hadoop. But I don't know what the 9001 in the screenshot above is for???? Not clear on this yet.
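(Piecing it together from the config files later in this note, a hedged guess: the plugin's DFS Master port seems to correspond to fs.default.name, and its Map/Reduce Master port to the old JobTracker address, i.e. roughly:

<!-- core-site.xml: DFS Master, port 9000 -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://mouap-pc:9000</value>
</property>

<!-- mapred-site.xml: Map/Reduce Master, port 9001, the old JobTracker address -->
<property>
  <name>mapreduce.jobtracker.address</name>
  <value>mouap-pc:9001</value>
</property>

so 9001 would only matter for the old MapReduce v1 job-submission path.)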


6 Then on the left, click the test location to load the files in DFS; you can see the two txt files uploaded earlier.


7 Next, prepare to write the code in Eclipse.

First the out directory has to be deleted, otherwise the run will fail. You can delete it by right-clicking in Eclipse, or from the command line:

bin/hadoop fs -rmr out

Then check that it is gone with bin/hadoop fs -ls, or refresh user in Eclipse and see that there is no folder left under user/root.
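(Side note: -rmr is marked deprecated in Hadoop 2; the current equivalent, which does the same thing here, is:

bin/hadoop fs -rm -r out
bin/hadoop fs -ls
)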


8 Then create a new MapReduce project, then a new Java class.

Then import the jars; the jar path is /usr/app/hadoop-eclipse-plugin/build/contrib/eclipse-plugin/lib
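(For what it's worth, the plugin's lib directory is only one option; in a stock Hadoop 2.x tarball the runtime jars also live under the install directory, roughly share/hadoop/common, share/hadoop/common/lib, share/hadoop/hdfs, share/hadoop/mapreduce and share/hadoop/yarn, which is where to look when a class comes up missing later.)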



import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.Tool;

public class WorldCountNew extends Configured implements Tool {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Debug output: the byte offset of the line and the line itself.
            System.out.println("key=" + key.toString());
            System.out.println("Value=" + value.toString());

            String line = value.toString();

            StringTokenizer str = new StringTokenizer(line);
            while (str.hasMoreTokens()) {
                word.set(str.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();

        // Added later, see step 10: point Hadoop at the install directory.
        System.setProperty("hadoop.home.dir", "c:/home/mouap/hadoop");

        System.out.println("url:" + conf.get("fs.default.name"));

        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();

        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }

        // The Job constructor is deprecated in Hadoop 2; see the 2.5 version below.
        Job job = new Job(conf, "word count");
        job.setJarByClass(WorldCountNew.class);

        job.setMapperClass(Map.class);
        // Our own Reduce class serves as both combiner and reducer.
        job.setReducerClass(Reduce.class);
        job.setCombinerClass(Reduce.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    @Override
    public int run(String[] args) throws Exception {
        // Tool.run is unused here; main drives the job directly.
        return 0;
    }

}

That is the code of the Java class.
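A side note on the code above: the class implements Tool but main never uses it, and run() just returns 0. If the Tool interface were actually wired up, the idiomatic pattern looks roughly like this (a minimal sketch with a made-up class name, not the code I am running):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ToolSkeleton extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // Build and submit the Job here using getConf(); ToolRunner has already
        // parsed the generic options (-D, -fs, -jt, ...) into that Configuration.
        System.out.println("remaining args: " + args.length);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new ToolSkeleton(), args));
    }
}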


9 Because the machine froze, after rebooting, starting the services fails for some of them because the NameNode conflicts with the old cached data.

So you need to:

>1 Clear everything under /home/mouap/hadoop/tmp

>2 Then start all the services: ./start-all.sh

>3 Then put the two txt files back into HDFS (a sketch of the commands follows this list)

>4 Then refresh the elephant icon in Eclipse and check that the paths and files are there
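A sketch of step 3's commands; the file names are made up, since I didn't write down what the two txt files are called:

bin/hadoop fs -mkdir -p /user/root
bin/hadoop fs -put file1.txt file2.txt /user/root
bin/hadoop fs -ls /user/root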


10 Next, hit Run and choose to run it on Hadoop, but it reports an error:

Picked up _JAVA_OPTIONS:   -Dawt.useSystemAAFontSettings=gasp
2015-05-01 11:55:02,759 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1019)) - fs.default.name is deprecated. Instead, use fs.defaultFS
url:file:///
Usage: wordcount <in> <out>

It turns out it couldn't find the Hadoop path, so add a line to the code:

System.setProperty("hadoop.home.dir", "c:/home/mouap/hadoop");


After that I ran it again; still no reaction, it just prints this:

Picked up _JAVA_OPTIONS:   -Dawt.useSystemAAFontSettings=gasp
Usage: wordcount <in> <out>

After adding some debug output, I found:

String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
System.out.println("url:" + conf.get("fs.default.name"));
System.out.println("test1 = "+ otherArgs.length);
System.out.println("test1 new GenericOptionsParser(conf, args) = "+new GenericOptionsParser(conf, args));

the length equals 0, so it exits right away.
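(In hindsight this makes sense: getRemainingArgs() returns whatever program arguments are left after GenericOptionsParser strips the generic -D/-fs/-jt options, so a length of 0 just means Eclipse launched the class with no program arguments at all. They have to be supplied under Run > Run Configurations > Arguments, e.g.

in out

or with explicit HDFS URIs (made-up paths):

hdfs://mouap-pc:9000/user/root/in hdfs://mouap-pc:9000/user/root/out

which is exactly what makes the length come out as 2 in step 11 below.)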


11 I was stuck here for ages with no solution in sight. Can some expert give me a pointer????

After that I dug through a lot of material and changed the config files to the following:

core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/mouap/hadoop/tmp</value>
  </property>
</configuration>

hdfs-site.xml

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/mouap/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/mouap/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

mapred-site.xml (copied from mapred-site.xml.template; note that Hadoop only reads mapred-site.xml, the .template file itself has no effect)

  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>localhost:9001</value>
  </property>

yarn-site.xml

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
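(One more note: according to the Hadoop docs, mapreduce.framework.name conventionally belongs in mapred-site.xml rather than yarn-site.xml, i.e.

mapred-site.xml
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

I haven't verified whether it still takes effect when placed in yarn-site.xml.)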

Then I cleared everything under /home/mouap/hadoop/tmp, and also cleaned out /home/mouap/hadoop/dfs/data and /home/mouap/hadoop/dfs/name.

Then format the NameNode (bin/hdfs namenode -format) and start the services.

24147 NodeManager
25717 Jps
23736 SecondaryNameNode
23432 NameNode
23560 DataNode
24015 ResourceManager

With all of these services running, according to the references this means the pseudo-distributed setup succeeded.

But it still won't run from Eclipse:
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();  still gets nothing ??

After fiddling with it for a whole afternoon, damn, I finally discovered that the conf/Job API in 2.5 is different. Referring to the official website's code:
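The concrete difference, for the record: the Job constructor is deprecated in Hadoop 2 in favor of a factory method.

// Hadoop 1.x style (deprecated in 2.x), what the first class above used:
Job job = new Job(conf, "word count");

// Hadoop 2.x style, what the official 2.5 example uses:
Job job = Job.getInstance(conf, "word count");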

package mouapTest;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.StringUtils;

public class WorldCount2_5_2 {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        static enum CountersEnum { INPUT_WORDS }

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        private boolean caseSensitive;
        private Set<String> patternsToSkip = new HashSet<String>();

        private Configuration conf;
        private BufferedReader fis;

        @Override
        public void setup(Context context) throws IOException, InterruptedException {
            conf = context.getConfiguration();
            caseSensitive = conf.getBoolean("wordcount.case.sensitive", true);
            // Default must be false: with no cached pattern files, getCacheFiles()
            // returns null and the loop below would throw a NullPointerException.
            if (conf.getBoolean("wordcount.skip.patterns", false)) {
                URI[] patternsURIs = Job.getInstance(conf).getCacheFiles();
                for (URI patternsURI : patternsURIs) {
                    Path patternsPath = new Path(patternsURI.getPath());
                    String patternsFileName = patternsPath.getName().toString();
                    parseSkipFile(patternsFileName);
                }
            }
        }

        private void parseSkipFile(String fileName) {
            try {
                fis = new BufferedReader(new FileReader(fileName));
                String pattern = null;
                while ((pattern = fis.readLine()) != null) {
                    patternsToSkip.add(pattern);
                }
            } catch (IOException ioe) {
                System.err.println("Caught exception while parsing the cached file '"
                    + StringUtils.stringifyException(ioe));
            }
        }

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = (caseSensitive) ? value.toString() : value.toString().toLowerCase();
            for (String pattern : patternsToSkip) {
                line = line.replaceAll(pattern, "");
            }

            StringTokenizer itr = new StringTokenizer(line);

            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
                Counter counter = context.getCounter(CountersEnum.class.getName(),
                    CountersEnum.INPUT_WORDS.toString());
                counter.increment(1);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();

        // System.setProperty("hadoop.home.dir", "c:/home/mouap/hadoop");
        System.setProperty("hadoop.home.dir", "c:/home/mouap/hadoop/etc/hadoop");

        GenericOptionsParser optionParser = new GenericOptionsParser(conf, args);

        String[] remainingArgs = optionParser.getRemainingArgs();

        System.out.println("test1 = " + remainingArgs.length);

        // Accept exactly <in> <out> or <in> <out> -skip <file>.
        if (remainingArgs.length != 2 && remainingArgs.length != 4) {
            System.err.println("Usage: wordcount <in> <out> [-skip skipPatternFile]");
            System.exit(2);
        }

        System.out.println("test2 = " + remainingArgs.length);

        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WorldCount2_5_2.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        List<String> otherArgs = new ArrayList<String>();
        for (int i = 0; i < remainingArgs.length; ++i) {
            if ("-skip".equals(remainingArgs[i])) {
                job.addCacheFile(new Path(remainingArgs[++i]).toUri());
                job.getConfiguration().setBoolean("wordcount.skip.patterns", true);
            } else {
                otherArgs.add(remainingArgs[i]);
            }
        }

        FileInputFormat.addInputPath(job, new Path(otherArgs.get(0)));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs.get(1)));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
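For reference, the program arguments this class expects in the Run configuration (the pattern file name here is made up):

in out
in out -skip patterns.txt

and, following the official WordCount2 walkthrough, case folding can be toggled with a generic option such as -Dwordcount.case.sensitive=false placed before the positional arguments.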


Created the new class, then configured the parameters in out in the Run dialog, and ran it.

It still errors out:

test1 = 2
test2 = 2

2015-05-01 17:57:55,497 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-05-01 17:57:55,663 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1019)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2015-05-01 17:57:55,665 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2015-05-01 17:57:55,854 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(259)) - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2015-05-01 17:57:55,868 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(441)) - Cleaning up the staging area file:/tmp/hadoop-mouap/mapred/staging/mouap911069748/.staging/job_local911069748_0001
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/home/mouap/workspace/MouapTest/in
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)

at mouapTest.WorldCount2_5_2.main(WorldCount2_5_2.java:148)

But you can see that the parser does get the data now; it prints 2.

12 The error says the in directory is missing, so I created an in directory under that path and ran it again.

Caused by: java.lang.ClassNotFoundException: org.apache.avro.io.DatumReader: a missing jar again, so I imported the relevant jars (the avro jar should be under share/hadoop/common/lib in the distribution).

13 Then I ran it again, but no results show up and I don't know why. Any experts care to take a look?


test1 = 2
test2 = 2
2015-05-01 18:25:58,370 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-05-01 18:25:58,621 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1019)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2015-05-01 18:25:58,624 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2015-05-01 18:25:58,886 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(259)) - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2015-05-01 18:25:58,895 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 0
2015-05-01 18:25:58,915 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:0
2015-05-01 18:25:59,060 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_local1120978678_0001
2015-05-01 18:25:59,135 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-mouap/mapred/staging/mouap1120978678/.staging/job_local1120978678_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-05-01 18:25:59,140 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-mouap/mapred/staging/mouap1120978678/.staging/job_local1120978678_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-05-01 18:25:59,266 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-mouap/mapred/local/localRunner/mouap/job_local1120978678_0001/job_local1120978678_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-05-01 18:25:59,270 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-mouap/mapred/local/localRunner/mouap/job_local1120978678_0001/job_local1120978678_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-05-01 18:25:59,276 INFO  [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://localhost:8080/
2015-05-01 18:25:59,277 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_local1120978678_0001
2015-05-01 18:25:59,277 INFO  [Thread-11] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
2015-05-01 18:25:59,285 INFO  [Thread-11] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2015-05-01 18:25:59,338 INFO  [Thread-11] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
2015-05-01 18:25:59,340 INFO  [Thread-11] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2015-05-01 18:25:59,347 INFO  [Thread-11] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks
2015-05-01 18:25:59,348 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local1120978678_0001_r_000000_0
2015-05-01 18:25:59,382 INFO  [pool-3-thread-1] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : [ ]
2015-05-01 18:25:59,384 INFO  [pool-3-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@762e47cc
2015-05-01 18:25:59,394 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(193)) - MergerManager: memoryLimit=619865664, maxSingleShuffleLimit=154966416, mergeThreshold=409111360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2015-05-01 18:25:59,397 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local1120978678_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2015-05-01 18:25:59,402 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
2015-05-01 18:25:59,405 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
2015-05-01 18:25:59,406 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(667)) - finalMerge called with 0 in-memory map-outputs and 0 on-disk map-outputs
2015-05-01 18:25:59,407 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(772)) - Merging 0 files, 0 bytes from disk
2015-05-01 18:25:59,407 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(787)) - Merging 0 segments, 0 bytes from memory into reduce
2015-05-01 18:25:59,410 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(591)) - Merging 0 sorted segments
2015-05-01 18:25:59,410 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(690)) - Down to the last merge-pass, with 0 segments left of total size: 0 bytes
2015-05-01 18:25:59,411 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
2015-05-01 18:25:59,421 INFO  [pool-3-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1019)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2015-05-01 18:25:59,435 INFO  [pool-3-thread-1] mapred.Task (Task.java:done(1001)) - Task:attempt_local1120978678_0001_r_000000_0 is done. And is in the process of committing
2015-05-01 18:25:59,448 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
2015-05-01 18:25:59,448 INFO  [pool-3-thread-1] mapred.Task (Task.java:commit(1162)) - Task attempt_local1120978678_0001_r_000000_0 is allowed to commit now
2015-05-01 18:25:59,449 INFO  [pool-3-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) - Saved output of task 'attempt_local1120978678_0001_r_000000_0' to file:/home/mouap/workspace/MouapTest/out/_temporary/0/task_local1120978678_0001_r_000000
2015-05-01 18:25:59,450 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - reduce > reduce
2015-05-01 18:25:59,450 INFO  [pool-3-thread-1] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1120978678_0001_r_000000_0' done.
2015-05-01 18:25:59,451 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) - Finishing task: attempt_local1120978678_0001_r_000000_0
2015-05-01 18:25:59,451 INFO  [Thread-11] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - reduce task executor complete.
2015-05-01 18:26:00,279 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_local1120978678_0001 running in uber mode : false
2015-05-01 18:26:00,283 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) -  map 0% reduce 100%
2015-05-01 18:26:00,290 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) - Job job_local1120978678_0001 completed successfully
2015-05-01 18:26:00,300 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 27
File System Counters
FILE: Number of bytes read=22
FILE: Number of bytes written=230257
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Combine input records=0
Combine output records=0
Reduce input groups=0
Reduce shuffle bytes=0
Reduce input records=0
Reduce output records=0
Spilled Records=0
Shuffled Maps =0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=0
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=76021760
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Output Format Counters 
Bytes Written=8
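My hedged reading of this log, written up afterwards: "Total input paths to process : 0", "number of splits:0" and "map 0% reduce 100%" say the job ran over an empty input directory, and the file:/home/mouap/workspace/MouapTest paths (plus the url:file:/// printed back in step 10) say it never talked to HDFS at all; new Configuration() inside Eclipse doesn't see the cluster's config files, so a relative path like in resolves against the local Eclipse project instead of HDFS. One way to test that theory is to load the cluster config explicitly (a minimal sketch; the class name is made up and the paths are assumed from this note's setup):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Load the cluster config explicitly so fs.default.name points at
        // hdfs://localhost:9000 instead of the default file:///.
        conf.addResource(new Path("/home/mouap/hadoop/etc/hadoop/core-site.xml"));
        conf.addResource(new Path("/home/mouap/hadoop/etc/hadoop/hdfs-site.xml"));
        System.out.println("url:" + conf.get("fs.default.name"));
    }
}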













