By Adamhuan | January 28, 2018
As the title suggests, you may find yourself unable to access the data in HDFS.
The symptom looks like this:
[hdfs@cloudera-1 ~]$ hdfs dfs -ls /
Warning: fs.defaultFS is not set when running "ls" command.
Found 22 items
-rw-r--r--   1 root root          0 2018-01-19 15:54 /.autorelabel
drwxr-xr-x   - root root        248 2018-01-20 04:22 /backup
dr-xr-xr-x   - root root      49152 2018-01-22 16:48 /bin
dr-xr-xr-x   - root root       4096 2018-01-17 14:42 /boot
drwxr-xr-x   - root root       3200 2018-01-20 23:34 /dev
drwxr-xr-x   - root root      12288 2018-01-25 18:56 /etc
drwxr-xr-x   - root root         56 2018-01-22 21:04 /home
dr-xr-xr-x   - root root       4096 2018-01-20 15:36 /lib
dr-xr-xr-x   - root root      73728 2018-01-20 06:44 /lib64
drwxr-xr-x   - root root         23 2018-01-19 20:19 /media
drwxr-xr-x   - root root          6 2016-11-05 23:38 /mnt
drwxr-xr-x   - root root         49 2018-01-25 18:56 /opt
dr-xr-xr-x   - root root          0 2018-01-20 23:34 /proc
dr-xr-x---   - root root       4096 2018-01-21 00:04 /root
drwxr-xr-x   - root root       1220 2018-01-22 16:48 /run
dr-xr-xr-x   - root root      20480 2018-01-20 06:45 /sbin
drwxr-xr-x   - root root       4096 2018-01-25 18:55 /software
drwxr-xr-x   - root root          6 2016-11-05 23:38 /srv
dr-xr-xr-x   - root root          0 2018-01-20 23:34 /sys
drwxrwxrwt   - root root       4096 2018-01-28 16:22 /tmp
drwxr-xr-x   - root root        177 2018-01-19 20:19 /usr
drwxr-xr-x   - root root       4096 2018-01-25 18:56 /var
[hdfs@cloudera-1 ~]$
The cause of this problem:
You are running the HDFS command on a host that is not a DataNode, and therefore has no Hadoop client configuration deployed. Note the warning `fs.defaultFS is not set` in the output above: without that setting, `hdfs dfs` falls back to the local filesystem, which is why the listing shows the local root directory (`/boot`, `/proc`, `/sbin`, ...) instead of HDFS. You can confirm that no DataNode is running on this host:
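The underlying fix is to make sure the Hadoop client configuration is present on the host (on CDH, by deploying the client configuration from Cloudera Manager). As a minimal sketch, the relevant entry in `core-site.xml` would look like the following; the NameNode hostname `cloudera-2` and port `8020` here are assumptions for illustration and must match your actual cluster:

```xml
<!-- core-site.xml fragment (sketch): tells the HDFS client which
     filesystem to use by default. Host and port are assumed values. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cloudera-2:8020</value>
</property>
```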
[hdfs@cloudera-1 ~]$ ps -ef | grep datanode
hdfs     24452 23005  0 16:35 pts/0    00:00:00 grep --color=auto datanode
[hdfs@cloudera-1 ~]$
If you run the same command on a DataNode, everything works as expected:
[root@cloudera-2 ~]# su - hdfs
[hdfs@cloudera-2 ~]$ hdfs dfs -ls /
Found 2 items
drwxrwxrwx   - hdfs supergroup          0 2018-01-21 01:35 /tmp
drwxr-xr-x   - hdfs supergroup          0 2018-01-21 01:35 /user
[hdfs@cloudera-2 ~]$
[hdfs@cloudera-2 ~]$ ps -ef | grep datanode
hdfs      6739  1549  1 Jan21 ?        02:54:41 /usr/java/jdk1.8.0_151/bin/java -Dproc_datanode -Xmx1000m -Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-cmf-hdfs-DATANODE-cloudera-2.log.out -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Xms1073741824 -Xmx1073741824 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/hdfs_hdfs-DATANODE-865c1ba4aab71010744c58fcc9a10009_pid6739.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
hdfs      6745  6739  0 Jan21 ?        00:00:00 python2.7 /usr/lib64/cmf/agent/build/env/bin/cmf-redactor /usr/lib64/cmf/service/hdfs/hdfs.sh datanode
hdfs     25719 24394  0 16:35 pts/0    00:00:00 grep --color=auto datanode
[hdfs@cloudera-2 ~]$
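If you cannot deploy the client configuration to the first host, a workaround is to point the client at the NameNode explicitly. This sketch assumes the NameNode is `cloudera-2` on the default port `8020` (adjust both for your cluster); the commands only make sense against a live cluster:

```shell
# Check what the client currently resolves fs.defaultFS to
# (on the misconfigured host this will not be an hdfs:// URI):
hdfs getconf -confKey fs.defaultFS

# Bypass the missing configuration by spelling out the full HDFS URI:
hdfs dfs -ls hdfs://cloudera-2:8020/
```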
————————————
Done.