but there is no HDFS_NAMENODE_USER defined

While following the book to install the Hadoop services, I hit this error:
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Fix:
Add the variable named in the error, HDFS_NAMENODE_USER=root, to /usr/local/hadoop-3.0.2/sbin/start-dfs.sh
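Hadoop 3.x startup scripts refuse to run HDFS daemons as root unless a *_USER variable names the allowed user. A minimal sketch of the lines to add near the top of start-dfs.sh — note the error only names HDFS_NAMENODE_USER; the DataNode and SecondaryNameNode variables are assumptions, added because those daemons typically trip the same check:

```shell
# Declare which user is allowed to run each HDFS daemon (here: root).
HDFS_NAMENODE_USER=root
HDFS_DATANODE_USER=root            # assumed: datanode hits the same check
HDFS_SECONDARYNAMENODE_USER=root   # assumed: secondary namenode likewise
```

These exports can also live in etc/hadoop/hadoop-env.sh, which keeps the shipped sbin scripts unmodified.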

Hadoop download address: http://archive.apache.org/dist/hadoop/core/
Error
[root@web78 hadoop-1.2.1]# bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 50
Number of Maps = 10
Samples per Map = 50
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:567)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:318)
at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:265)
at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
References
https://blog.csdn.net/weiyongle1996/article/details/74094989/
Fix

"Server IPC version 9 cannot communicate with client version 4" generally means the client and the cluster are running incompatible Hadoop versions. Here, checking the directory configured as hadoop.tmp.dir in core-site.xml showed it was empty, so the cluster needed to be reinitialized:
Stop the cluster: bin/stop-all.sh
Reformat the NameNode: bin/hadoop namenode -format
Restart the cluster: bin/start-all.sh

After completing the steps above, re-running the job succeeds:
[root@web78 hadoop-1.2.1]# bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 50
Number of Maps = 10
Samples per Map = 50
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
18/06/08 16:15:25 INFO mapred.FileInputFormat: Total input paths to process : 10
18/06/08 16:15:26 INFO mapred.JobClient: Running job: job_201806081614_0001
18/06/08 16:15:27 INFO mapred.JobClient: map 0% reduce 0%
18/06/08 16:15:41 INFO mapred.JobClient: map 20% reduce 0%
18/06/08 16:15:50 INFO mapred.JobClient: map 40% reduce 0%
18/06/08 16:16:11 INFO mapred.JobClient: map 50% reduce 0%
18/06/08 16:16:14 INFO mapred.JobClient: map 70% reduce 0%
18/06/08 16:16:15 INFO mapred.JobClient: map 80% reduce 0%
18/06/08 16:16:32 INFO mapred.JobClient: map 90% reduce 0%
18/06/08 16:16:34 INFO mapred.JobClient: map 90% reduce 26%
18/06/08 16:16:37 INFO mapred.JobClient: map 100% reduce 26%
18/06/08 16:16:40 INFO mapred.JobClient: map 100% reduce 30%
18/06/08 16:16:45 INFO mapred.JobClient: map 100% reduce 100%
18/06/08 16:16:46 INFO mapred.JobClient: Job complete: job_201806081614_0001
18/06/08 16:16:46 INFO mapred.JobClient: Counters: 30
18/06/08 16:16:46 INFO mapred.JobClient: Job Counters
18/06/08 16:16:46 INFO mapred.JobClient: Launched reduce tasks=1
18/06/08 16:16:46 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=214169
18/06/08 16:16:46 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
18/06/08 16:16:46 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
18/06/08 16:16:46 INFO mapred.JobClient: Launched map tasks=10
18/06/08 16:16:46 INFO mapred.JobClient: Data-local map tasks=10
18/06/08 16:16:46 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=63724
18/06/08 16:16:46 INFO mapred.JobClient: File Input Format Counters
18/06/08 16:16:46 INFO mapred.JobClient: Bytes Read=1180
18/06/08 16:16:46 INFO mapred.JobClient: File Output Format Counters
18/06/08 16:16:46 INFO mapred.JobClient: Bytes Written=97
18/06/08 16:16:46 INFO mapred.JobClient: FileSystemCounters
18/06/08 16:16:46 INFO mapred.JobClient: FILE_BYTES_READ=226
18/06/08 16:16:46 INFO mapred.JobClient: HDFS_BYTES_READ=2430
18/06/08 16:16:46 INFO mapred.JobClient: FILE_BYTES_WRITTEN=616854
18/06/08 16:16:46 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
18/06/08 16:16:46 INFO mapred.JobClient: Map-Reduce Framework
18/06/08 16:16:46 INFO mapred.JobClient: Map output materialized bytes=280
18/06/08 16:16:46 INFO mapred.JobClient: Map input records=10
18/06/08 16:16:46 INFO mapred.JobClient: Reduce shuffle bytes=280
18/06/08 16:16:46 INFO mapred.JobClient: Spilled Records=40
18/06/08 16:16:46 INFO mapred.JobClient: Map output bytes=180
18/06/08 16:16:46 INFO mapred.JobClient: Total committed heap usage (bytes)=2035286016
18/06/08 16:16:46 INFO mapred.JobClient: CPU time spent (ms)=8210
18/06/08 16:16:46 INFO mapred.JobClient: Map input bytes=240
18/06/08 16:16:46 INFO mapred.JobClient: SPLIT_RAW_BYTES=1250
18/06/08 16:16:46 INFO mapred.JobClient: Combine input records=0
18/06/08 16:16:46 INFO mapred.JobClient: Reduce input records=20
18/06/08 16:16:46 INFO mapred.JobClient: Reduce input groups=20
18/06/08 16:16:46 INFO mapred.JobClient: Combine output records=0
18/06/08 16:16:46 INFO mapred.JobClient: Physical memory (bytes) snapshot=1749561344
18/06/08 16:16:46 INFO mapred.JobClient: Reduce output records=0
18/06/08 16:16:46 INFO mapred.JobClient: Virtual memory (bytes) snapshot=7826571264
18/06/08 16:16:46 INFO mapred.JobClient: Map output records=20
Job Finished in 81.206 seconds
Estimated value of Pi is 3.16000000000000000000
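The pi example above is a Monte Carlo estimate: each map task throws random points at the unit square and counts how many land inside the quarter circle. The same idea in plain Python — a standalone sketch of the technique, not the Hadoop PiEstimator code:

```python
import random

def estimate_pi(num_maps: int, samples_per_map: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi, mirroring `pi 10 50`: 10 'maps' x 50 samples each."""
    rng = random.Random(seed)
    total = num_maps * samples_per_map
    inside = 0
    for _ in range(total):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / total    # (quarter-circle area / square area) * 4 ≈ pi

print(estimate_pi(10, 50))
```

With only 500 samples the estimate is coarse (the job above got 3.16); accuracy improves with more maps or more samples per map.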

Create a directory: bin/hadoop dfs -mkdir /hadoop/word
Upload the file: bin/hadoop fs -put /root/hadoop/input.txt /hadoop/word/
List the uploaded file: bin/hadoop dfs -ls /hadoop/word
cd into the directory holding the test code (the input file input.txt, the map script mapper.py, and the reduce script reducer.py), then submit the streaming job:
/usr/local/hadoop-1.2.1/bin/hadoop jar /usr/local/hadoop-1.2.1/contrib/streaming/hadoop-streaming-1.2.1.jar -file ./mapper.py -mapper ./mapper.py -file ./reducer.py -reducer ./reducer.py -input /hadoop/word -output /hadoop/output
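mapper.py and reducer.py are not shown in the post. A minimal word-count pair in the usual Hadoop Streaming style might look like the sketch below (an assumption about what the scripts do, not the author's actual code). In a real deployment each function lives in its own executable script reading sys.stdin; here they share one file so the pipeline can be simulated locally:

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit 'word\t1' for every word in the input lines."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(pairs):
    """Reduce phase: sum the counts per word. Relies on input being
    sorted by key, which the Hadoop shuffle phase guarantees."""
    keyed = (p.split("\t") for p in pairs)
    for word, group in groupby(keyed, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

if __name__ == "__main__":
    # Simulate `cat input | mapper.py | sort | reducer.py` on sample data.
    sample = ["hello world", "hello hadoop"]
    for out in reducer(sorted(mapper(sample))):
        print(out)
```

The local `mapper | sort | reducer` simulation is a handy way to debug streaming scripts before submitting them to the cluster.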

Download mrjob-0.4.2: https://pypi.org/project/mrjob/0.4.2/#files
