While learning Impala, I configured short-circuit reads for the HDFS cluster. After restarting HDFS, the DataNodes failed to start and HDFS stayed in safe mode.
Checking the DataNode log:
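The excerpt below comes from the DataNode's log file. Assuming the install path shown later in this post, one way to pull up the most recent entries (the exact file name depends on the user and hostname running the daemon):
# Tail the DataNode log on the affected node; the file name varies by user and hostname
tail -n 200 /opt/lagou/servers/hadoop-2.9.2/logs/hadoop-*-datanode-*.log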
java.io.IOException: The path component: '/var/lib/hadoop-hdfs' in '/var/lib/hadoop-hdfs/dn_socket' has permissions 0755 uid 993 and gid 991. It is not protected because it is owned by a user who is not root and not the effective user: '0'. This might help: 'chown root /var/lib/hadoop-hdfs' or 'chown 0 /var/lib/hadoop-hdfs'. For more information: https://wiki.apache.org/hadoop/SocketPathSecurity
at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)
at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:193)
at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:1171)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1137)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1369)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2645)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2789)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2813)
Problem found:
it is owned by a user who is not root and not the effective user: '0'.
Looking at the directory configured for short-circuit reads, its owner was indeed not root.
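To confirm this, check the ownership and permissions of the socket directory on each DataNode; the uid 993 / gid 991 from the error map to whichever non-root account created the directory:
# Show the owner, group, and mode of the socket directory
ls -ld /var/lib/hadoop-hdfs
stat -c '%U %G %a' /var/lib/hadoop-hdfs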
Solution:
Run chown root /var/lib/hadoop-hdfs (equivalently, chown 0 /var/lib/hadoop-hdfs) on the directory.
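A minimal sketch of applying this on every DataNode in one pass; node1, node2 and node3 are placeholder hostnames, substitute your own:
# Placeholder hostnames: change the owner of the socket directory on each DataNode
for host in node1 node2 node3; do
  ssh root@"$host" 'chown root /var/lib/hadoop-hdfs'
done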
What is a short-circuit read?
When the client and the DataNode sit on the same node, the client can read the block data directly from the local disk instead of transferring it over the network through the DataNode.
To configure short-circuit local reads, first verify that the local Hadoop installation has libhadoop.so:
/opt/lagou/servers/hadoop-2.9.2/lib/native
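One way to verify is to list that native directory, or ask Hadoop itself which native libraries it can load (the checknative output should report hadoop: true along with the path to libhadoop.so):
# List the native libraries shipped with this install
ls /opt/lagou/servers/hadoop-2.9.2/lib/native
# Or let Hadoop report whether the native hadoop library can be loaded
hadoop checknative -a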
Short-circuit read configuration steps:
- Create the local directory that will hold the UNIX domain socket used by short-circuit reads
# Create the following directory on all nodes
mkdir -p /var/lib/hadoop-hdfs
- Edit hdfs-site.xml (the same file must then be copied to every node; see the sketch after the configuration)
<!-- Add the following properties -->
<!-- Enable short-circuit reads -->
<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
</property>
<!-- Path of the UNIX domain socket used for communication between the DataNode and the local HDFS client -->
<property>
<name>dfs.domain.socket.path</name>
<value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
<!-- Enable DataNode support for exposing block storage location metadata -->
<property>
<name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.client.file-block-storage-locations.timeout</name>
<value>30000</value>
</property>
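The edited hdfs-site.xml has to be present on every node before restarting. A minimal sketch of pushing it out, assuming the install path above and placeholder hostnames node2 and node3:
# Placeholder hostnames: copy the edited config to the other nodes
for host in node2 node3; do
  scp /opt/lagou/servers/hadoop-2.9.2/etc/hadoop/hdfs-site.xml "$host":/opt/lagou/servers/hadoop-2.9.2/etc/hadoop/
done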
In summary, the DataNodes failed to start because the short-circuit read directory we created was not owned by root, so we need to run chown root /var/lib/hadoop-hdfs on all nodes and then restart.
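After fixing the ownership, a sketch of restarting the DataNodes with the Hadoop 2.x daemon script and confirming that the NameNode leaves safe mode and the DataNodes report in:
# On each DataNode (install path as above)
/opt/lagou/servers/hadoop-2.9.2/sbin/hadoop-daemon.sh stop datanode
/opt/lagou/servers/hadoop-2.9.2/sbin/hadoop-daemon.sh start datanode
# On the NameNode: safe mode should be OFF and all DataNodes should be listed
hdfs dfsadmin -safemode get
hdfs dfsadmin -report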