Hadoop DataNode fails to start: "ulimit -a for user root"

DataNode startup log output (.out file) after starting HDFS:


[root@linux128 hadoop]# cat /opt/lagou/servers/hadoop-2.9.2/logs/hadoop-root-datanode-linux128.out
ulimit -a for user root
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14988
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14988
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The .out file only contains the ulimit dump that is printed on every startup; it is not the actual error. The real cause is in hadoop-root-datanode-linux128.log:

2020-07-25 15:49:21,890 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: The path component: '/var/lib/hadoop-hdfs' in '/var/lib/hadoop-hdfs/dn_socket' has permissions 0755 uid 993 and gid 991. It is not protected because it is owned by a user who is not root and not the effective user: '0'. This might help: 'chown root /var/lib/hadoop-hdfs' or 'chown 0 /var/lib/hadoop-hdfs'. For more information: https://wiki.apache.org/hadoop/SocketPathSecurity
        at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:193)
        at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:1171)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1137)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1369)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2645)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2789)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2813)

Following the hint in the log message, change the owner of /var/lib/hadoop-hdfs: run chown root /var/lib/hadoop-hdfs on every DataNode node (a sketch for doing this over SSH follows).
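
A minimal sketch for applying the fix to all DataNodes in one pass. Only linux128 appears in the log above; linux129 and linux130 are assumed hostnames, so substitute your own node list:

# assumed DataNode hostnames -- only linux128 is confirmed by the log
for host in linux128 linux129 linux130; do
  ssh root@"$host" "chown root /var/lib/hadoop-hdfs"
done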

Restart HDFS; the DataNode now starts successfully.
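
For reference, restarting with the scripts bundled in the release (paths per the install location shown above):

cd /opt/lagou/servers/hadoop-2.9.2
sbin/stop-dfs.sh
sbin/start-dfs.sh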

The root cause: hdfs-site.xml had been edited (while running HDFS as the root account) to add the following short-circuit read configuration:

<!-- Enable short-circuit local reads -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<!-- Path to a UNIX domain socket used for communication between the DataNode and local HDFS clients -->
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
<!-- Enable exposing block storage location metadata -->
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout</name>
  <value>30000</value>
</property>
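
To confirm the settings are actually picked up from the configuration files, hdfs getconf can print individual keys, for example:

hdfs getconf -confKey dfs.client.read.shortcircuit
hdfs getconf -confKey dfs.domain.socket.path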

By default, /var/lib/hadoop-hdfs/ is owned by hdfs:hadoop. Since HDFS is started as the root user here, changing the ownership to root:hadoop resolves the problem.
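
A quick check of the final state: per the error message above, each component of the socket path must be owned by root or the effective user, which ls can confirm after the fix:

chown root:hadoop /var/lib/hadoop-hdfs   # equivalent fix with the group set explicitly
ls -ld /var/lib/hadoop-hdfs
# expected after the fix: drwxr-xr-x ... root hadoop ... /var/lib/hadoop-hdfs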
