Hadoop: Scheduling Hadoop Jobs with crontab (Error When Running a crontab Entry as the hadoop User)

Copyright notice: This is an original article by the blogger, distributed under the CC 4.0 BY-SA license. When reposting, please include the original source link and this notice.
Original link: https://blog.csdn.net/l1028386804/article/details/95964457


Problem:

I added an entry to /etc/crontab, intending to run the script as the hadoop user:

*/5 * * * * hadoop /bin/sh /home/hadoop/runhadoop.sh

It kept failing with an error like:

crontab Error creating temp dir in hadoop.tmp.dir file: due to Permission denied

Solution:

Switch to the hadoop user:

su hadoop

Then edit that user's own crontab:

crontab -e
*/5 * * * * /bin/sh /home/hadoop/run_hadoop_program.sh
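Note the difference in entry format: /etc/crontab takes an extra username field, while a per-user crontab edited with crontab -e does not:

```
# /etc/crontab format: minute hour day month weekday  user  command
*/5 * * * * hadoop /bin/sh /home/hadoop/runhadoop.sh

# Per-user crontab format (crontab -e): no user field; the entry runs
# as the user who owns the crontab.
*/5 * * * * /bin/sh /home/hadoop/run_hadoop_program.sh
```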

Switch to root and restart crond:

su root
service crond restart

After that the job runs as expected.
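To confirm the entry took effect, you can list the hadoop user's crontab as root and look at the cron log (the log path varies by distribution; /var/log/cron is typical on CentOS):

```shell
# List the hadoop user's crontab (run as root); falls back to a message
# if no such crontab exists or crontab is unavailable.
crontab -l -u hadoop 2>/dev/null || echo "no crontab listed for hadoop"

# Show recent cron activity if the CentOS-style log file exists.
{ [ -f /var/log/cron ] && tail -n 20 /var/log/cron; } || true
```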

Note: the shell script itself must export all of the Hadoop environment variables, because cron launches jobs with a nearly empty environment.
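A minimal sketch of what run_hadoop_program.sh might look like (the JDK and Hadoop install paths below are assumptions; adjust them to your environment). Since cron provides no JAVA_HOME and only a bare-bones PATH, the script exports everything the hadoop command needs itself:

```shell
#!/bin/sh
# run_hadoop_program.sh -- minimal sketch; install paths are assumptions.
export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/java}"      # hypothetical JDK path
export HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"  # hypothetical install path
export PATH="$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH"

# Run the job and append output to a log so failures under cron are visible.
# "your-job.jar" is a placeholder for the actual jar to run.
if command -v hadoop >/dev/null 2>&1; then
    hadoop jar "$HADOOP_HOME/share/hadoop/mapreduce/your-job.jar" \
        >> /home/hadoop/run_hadoop_program.log 2>&1 \
        || echo "hadoop job failed; see run_hadoop_program.log" >&2
else
    echo "hadoop not found on PATH -- check HADOOP_HOME" >&2
fi
```

Redirecting stdout and stderr to a log file is worth keeping: cron otherwise discards (or mails) the output, which makes errors like the permission failure above hard to diagnose.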

 
