Hadoop WordCount.java Development (Implementing a Hadoop MapReduce Program in Java - WordCount)

  1. The Java program is as follows:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

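  // Mapper: splits each input line into tokens and emits a (word, 1) pair per token.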
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

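  // Reducer (also registered as the combiner): sums the counts emitted for each word.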
  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

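  // Driver: args[0] is the input path, args[1] is the output path (must not already exist).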
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
//    job.setJar("WordCount.jar");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
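
A side note: the job log in step 3 below warns "Implement the Tool interface and execute your application with ToolRunner to remedy this." A minimal sketch of how the same job could be driven through ToolRunner is shown here; the class name WordCountDriver is illustrative, and it reuses the TokenizerMapper and IntSumReducer defined above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // getConf() returns the Configuration prepared by ToolRunner, so generic
    // command-line options (-D, -files, -libjars, ...) are already applied.
    Job job = Job.getInstance(getConf(), "word count");
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(WordCount.TokenizerMapper.class);
    job.setCombinerClass(WordCount.IntSumReducer.class);
    job.setReducerClass(WordCount.IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
  }
}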

2. Build the jar

     hadoop com.sun.tools.javac.Main WordCount.java

Running this produced the following error:

     Error: Could not find or load main class com.sun.tools.javac.Main

The real cause is a misconfigured environment variable. Add a HADOOP_CLASSPATH setting to hadoop-env.sh:

  export HADOOP_CLASSPATH=".:${JAVA_HOME}/lib/tools.jar"

Then run the compile again and package the resulting classes:

    hadoop com.sun.tools.javac.Main WordCount.java

    jar cf WordCount.jar WordCount*.class

This produces WordCount.jar.

3. Run the MapReduce job

hadoop jar WordCount.jar WordCount /input/ /output/

At this point the job fails with errors like the following:

Job jar is not present. Not adding any jar to the list of resources.

 java.lang.ClassNotFoundException: Class WordCount$TokenizerMapper not found

19/01/12 19:45:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/01/12 19:45:46 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/01/12 19:45:46 WARN mapreduce.JobResourceUploader: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
19/01/12 19:45:46 INFO input.FileInputFormat: Total input paths to process : 1
19/01/12 19:45:47 INFO mapreduce.JobSubmitter: number of splits:1
19/01/12 19:45:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1547121080184_0008
19/01/12 19:45:47 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
19/01/12 19:45:47 INFO impl.YarnClientImpl: Submitted application application_1547121080184_0008
19/01/12 19:45:47 INFO mapreduce.Job: The url to track the job: http://tc-bjsecond28.tc:8088/proxy/application_1547121080184_0008/
19/01/12 19:45:47 INFO mapreduce.Job: Running job: job_1547121080184_0008
19/01/12 19:45:53 INFO mapreduce.Job: Job job_1547121080184_0008 running in uber mode : false
19/01/12 19:45:53 INFO mapreduce.Job:  map 0% reduce 0%
19/01/12 19:45:57 INFO mapreduce.Job: Task Id : attempt_1547121080184_0008_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class WordCount$TokenizerMapper not found
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2267)
        at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:745)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassNotFoundException: Class WordCount$TokenizerMapper not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2171)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2265)
        ... 8 more

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

19/01/12 19:46:01 INFO mapreduce.Job: Task Id : attempt_1547121080184_0008_m_000000_1, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class WordCount$TokenizerMapper not found
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2267)
        at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:745)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassNotFoundException: Class WordCount$TokenizerMapper not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2171)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2265)
        ... 8 more

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

19/01/12 19:46:04 INFO mapreduce.Job: Task Id : attempt_1547121080184_0008_m_000000_2, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class WordCount$TokenizerMapper not found
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2267)
        at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:745)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassNotFoundException: Class WordCount$TokenizerMapper not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2171)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2265)
        ... 8 more

19/01/12 19:46:09 INFO mapreduce.Job:  map 100% reduce 100%
19/01/12 19:46:10 INFO mapreduce.Job: Job job_1547121080184_0008 failed with state FAILED due to: Task failed task_1547121080184_0008_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

19/01/12 19:46:11 INFO mapreduce.Job: Counters: 13
        Job Counters
                Failed map tasks=4
                Killed reduce tasks=1
                Launched map tasks=4
                Other local map tasks=3
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=8766
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=8766
                Total time spent by all reduce tasks (ms)=0
                Total vcore-milliseconds taken by all map tasks=8766
                Total vcore-milliseconds taken by all reduce tasks=0
                Total megabyte-milliseconds taken by all map tasks=8976384
                Total megabyte-milliseconds taken by all reduce tasks=

One possible cause is that when WordCount$TokenizerMapper.class is looked up, the current directory is searched first: the loose .class files satisfy the local lookup, so no job jar is attached to the submission (hence the "Job jar is not present" / "No job jar file set" warnings above) and the class is missing on the cluster nodes. There are two solutions:

     3.1 Move (cp / mv) WordCount.jar into a separate directory, or delete the WordCount*.class files from the current directory.

   or 

      3.2 Add the following line to the driver code (see the sketch after this list) and rebuild the jar:

           job.setJar("WordCount.jar");

Either fix resolves the problem.
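
For reference, this is the main() method from step 1 with the commented-out setJar line enabled (option 3.2); the jar name passed to setJar must match the file built in step 2:

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    // Ship this exact jar with the job instead of relying on
    // setJarByClass() to locate it on the local classpath.
    job.setJar("WordCount.jar");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }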

            
