1. ERROR: Unable to write in /opt/hadoop-3.3.0/logs. Aborting. Starting datanodes
Solution: give the user that runs Hadoop ownership of the installation tree named in the error: sudo chown -R hadoop:hadoop /opt/hadoop-3.3.0
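A minimal sketch of the fix, assuming Hadoop lives under /opt/hadoop-3.3.0 and is run by a user named hadoop (adjust both to your setup):

# Check who currently owns the logs directory
ls -ld /opt/hadoop-3.3.0/logs
# Hand the installation tree to the hadoop user so the start
# scripts can create files under logs/
sudo chown -R hadoop:hadoop /opt/hadoop-3.3.0
# Retry the startup as that user
su - hadoop -c /opt/hadoop-3.3.0/sbin/start-dfs.sh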
2. WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Solution: the warning is harmless; to silence it, add log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR to /opt/hadoop-3.3.0/etc/hadoop/log4j.properties
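One way to apply this from the shell; note it only raises the logger's threshold, it does not make Hadoop load the native library:

# Suppress the startup WARN from NativeCodeLoader
echo 'log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR' >> /opt/hadoop-3.3.0/etc/hadoop/log4j.properties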
3. localhost: ERROR: Cannot set priority of datanode process
Solution: add storage directories to hdfs-site.xml (dfs.namenode.name.dir and dfs.datanode.data.dir are the Hadoop 3.x names for the deprecated dfs.name.dir / dfs.data.dir):
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop-3.3.0/hdfsDir/name</value>
    <description>Where the NameNode stores HDFS namespace metadata</description>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop-3.3.0/hdfsDir/data</value>
    <description>Physical location of the data blocks stored on the DataNode</description>
</property>
The directories must exist and be writable by the Hadoop user; see the sketch below.
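A minimal sketch for creating those directories, assuming the paths above and a hadoop user:

# Create the NameNode and DataNode storage directories from hdfs-site.xml
mkdir -p /opt/hadoop-3.3.0/hdfsDir/name /opt/hadoop-3.3.0/hdfsDir/data
# Make them writable by the user that runs Hadoop
sudo chown -R hadoop:hadoop /opt/hadoop-3.3.0/hdfsDir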
4. Cannot create xxx/xxx.txt. Name node is in safe mode.
Solution: hdfs dfsadmin -safemode leave (hadoop dfsadmin is the deprecated pre-3.x spelling)
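Safe mode normally ends on its own once the NameNode has loaded its image and enough blocks are reported, so it is worth checking the state before forcing it off:

# Report whether safe mode is ON or OFF
hdfs dfsadmin -safemode get
# If it never clears, force it off
hdfs dfsadmin -safemode leave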
5. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
Solution: delete the current folder under ./data/dfs/data in the Hadoop installation directory,
then re-run hdfs namenode -format.
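The full sequence as a sketch, assuming the install root used above; note that reformatting wipes all HDFS metadata, so this is only appropriate on a fresh or disposable cluster:

cd /opt/hadoop-3.3.0
sbin/stop-dfs.sh
# Remove the stale DataNode state that no longer matches the NameNode
rm -rf ./data/dfs/data/current
# Reformat the NameNode and bring HDFS back up
hdfs namenode -format
sbin/start-dfs.sh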
6. ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Failed to start secondary namenode
Inspect current/VERSION inside the data and name folders under the Hadoop directory: the clusterID values do not match.
Cause: the NameNode was formatted more than once, so the namespaceID/clusterID recorded by the NameNode and the DataNode diverged.
Solution: delete everything under the HDFS data directory (hadoop.tmp.dir configured in core-site.xml):
1. Delete the data and name folders under the Hadoop directory …/tmp/dfs
2. Re-initialize the NameNode: hdfs namenode -format
3. Start HDFS: start-dfs.sh
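To confirm the mismatch before wiping anything, compare the two IDs directly; the paths below assume hadoop.tmp.dir points at /opt/hadoop-3.3.0/tmp (check your core-site.xml):

# Print the clusterID recorded on each side; if they differ,
# the DataNode will refuse to join the cluster
grep clusterID /opt/hadoop-3.3.0/tmp/dfs/name/current/VERSION
grep clusterID /opt/hadoop-3.3.0/tmp/dfs/data/current/VERSION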