I recently ran into a puzzling problem: a Hadoop map job that had been working suddenly started failing with an error. It troubled me for quite a while, so I'm sharing what I found.
Error message
2017-05-18 21:34:22,104 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8032
2017-05-18 21:34:22,642 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2017-05-18 21:34:22,689 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(171)) - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2017-05-18 21:34:22,748 INFO [main] input.FileInputFormat (FileInputFormat.java:listStatus(283)) - Total input paths to process : 1
2017-05-18 21:34:23,064 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(198)) - number of splits:3
2017-05-18 21:34:23,263 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(287)) - Submitting tokens for job: job_1495112477030_0010
2017-05-18 21:34:23,521 INFO [main] mapred.YARNRunner (YARNRunner.java:createApplicationSubmissionContext(371)) - Job jar is not present. Not adding any jar to the list of resources.
2017-05-18 21:34:23,598 INFO [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(273)) - Submitted application application_1495112477030_0010
2017-05-18 21:34:23,661 INFO [main] mapreduce.Job (Job.java:submit(1294)) - The url to track the job: http://ubuntu-zj0633:8088/proxy/application_1495112477030_0010/
2017-05-18 21:34:23,662 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1339)) - Running job: job_1495112477030_0010
2017-05-18 21:34:30,858 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1360)) - Job job_1495112477030_0010 running in uber mode : false
2017-05-18 21:34:30,859 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1367)) - map 0% reduce 0%
2017-05-18 21:34:39,078 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1367)) - map 33% reduce 0%
2017-05-18 21:34:39,087 INFO [main] mapreduce.Job (Job.java:printTaskEvents(1406)) - Task Id : attempt_1495112477030_0010_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.map.count.TokenizerMapper not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
    at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:745)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassNotFoundException: Class com.map.count.TokenizerMapper not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
    ... 8 more
Solutions
Method 1:
The fix usually suggested online simply didn't work for me. One user posted the following approach:
First, make sure the usual advice has been applied, i.e. add this in main:
job.setJarByClass(WordCount.class); (this line exists in most versions of the example; as explained below, it is actually the root of the problem and is essentially useless here).
Method 2:
1. Package the project as a .jar file in a directory of your choice.
2. At the point in the code where the job is submitted, add job.setJar(jarPath);, where jarPath is the absolute path of the jar you just exported, e.g. /home/hadoop/workspace/WordCount.jar.
3. Run again; the error no longer occurs.
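As a sketch, the steps of Method 2 come together in a driver like the one below. This is illustrative only, not the original poster's exact code: the mapper class name comes from this post's error log (com.map.count.TokenizerMapper) and the jar path from the example above; adjust both to your project.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import com.map.count.TokenizerMapper;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");

        // Step 2: point the job directly at the jar exported in step 1,
        // instead of relying on setJarByClass to locate it.
        job.setJar("/home/hadoop/workspace/WordCount.jar");

        job.setMapperClass(TokenizerMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that job.setJar replaces, rather than supplements, the setJarByClass call from Method 1.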
Method 3:
1. Delete core-site.xml, mapred-site.xml, and the other cluster configuration files from the project directory, so the job is not submitted to the cluster.
2. Prefix the project's input and output paths with hdfs:// followed by the master node's URL.
3. Run again; problem solved.
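In code, step 2 of Method 3 amounts to something like the lines below, assuming a hypothetical NameNode address master:9000 (substitute your own master node's host and port):

```java
// With core-site.xml / mapred-site.xml removed from the classpath the job
// runs locally, so the HDFS location must be spelled out in the paths.
FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/user/hadoop/input"));
FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/user/hadoop/output"));
```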
Notes on this problem
The warning in the output,
2017-05-18 21:34:22,689 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(171)) - No job jar file set. User classes may not be found. See Job or Job#setJar(String)
means that the setJarByClass statement failed to set the job jar. Why is that?
Because setJarByClass uses the class loader of WordCount.class to find the jar containing that class, and then sets that jar as the job jar. But our job jar is only built at run time, while the class loader of WordCount.class is the AppClassLoader, whose search path cannot be changed once the program is running. So setJarByClass cannot set the job jar. We have to set it directly with JobConf's setJar, like this:
((JobConf)job.getConfiguration()).setJar(jarFile); (I'm not sure what the jarFile variable should be here, mainly because I don't fully understand what setJar does, so I won't pursue this variant; the fix I actually used is the one above.)
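To make the "built at run time" point concrete: what setJar ultimately needs is a jar that actually exists on disk, so the missing step is producing one from the compiled classes before the job is submitted. Below is a plain-Java sketch of that packaging step; the class RuntimeJarPackager and its helper are made up for illustration (only the package path com/map/count comes from the error log), and the resulting path is what could be passed as jarFile.

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.Attributes;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;
import java.util.stream.Stream;

public class RuntimeJarPackager {

    // Packs every regular file under classesDir into a jar at jarPath,
    // using the path relative to classesDir as the jar entry name.
    public static String packageJar(Path classesDir, Path jarPath) throws IOException {
        Manifest manifest = new Manifest();
        manifest.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        try (JarOutputStream jar = new JarOutputStream(Files.newOutputStream(jarPath), manifest);
             Stream<Path> files = Files.walk(classesDir)) {
            files.filter(Files::isRegularFile).forEach(p -> {
                try {
                    String entry = classesDir.relativize(p).toString()
                                             .replace(File.separatorChar, '/');
                    jar.putNextEntry(new JarEntry(entry));
                    Files.copy(p, jar);
                    jar.closeEntry();
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
        return jarPath.toString();
    }

    public static void main(String[] args) throws IOException {
        // Fake "compiled output": one class file in the package from the error log.
        Path classesDir = Files.createTempDirectory("classes");
        Path pkg = classesDir.resolve("com/map/count");
        Files.createDirectories(pkg);
        Files.write(pkg.resolve("TokenizerMapper.class"), new byte[]{(byte) 0xCA, (byte) 0xFE});

        Path jarPath = Files.createTempFile("WordCount", ".jar");
        String jarFile = packageJar(classesDir, jarPath);
        // jarFile is what would then go to ((JobConf) job.getConfiguration()).setJar(jarFile);
        try (JarFile jf = new JarFile(jarFile)) {
            System.out.println("mapper packaged: "
                    + (jf.getJarEntry("com/map/count/TokenizerMapper.class") != null));
        }
    }
}
```

In practice, of course, exporting the jar once from the IDE and hard-coding its path (Method 2 above) achieves the same thing with less machinery.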
In fact, Method 2 is really a cluster run, which is why it needs the jar file to be specified explicitly, whereas Method 3 runs in local single-machine mode: only the input and output files live on HDFS, while the program itself still runs locally.