Starting the NameNode on two VMs at the same time

Getting the NameNode on slave1 running in addition to the one on master


This problem really had me stumped~~~~~~~~
The short version first:
Normally it's the configuration files that cause trouble, but this time mine were fine. Even so, only the NameNode on master came up; the NameNode on slave1 refused to appear no matter what.
Yet when I ran start-dfs.sh, the output looked perfectly normal:

[root@master journaldata]# start-dfs.sh
Starting namenodes on [master slave1]   # <-- this line is what's so maddening
Last login: Wed Jun 16 19:54:25 CST 2021 on pts/0
Starting datanodes
Last login: Wed Jun 16 20:07:29 CST 2021 on pts/0
Starting journal nodes [slave2 slave1 master]
Last login: Wed Jun 16 20:07:32 CST 2021 on pts/0
slave1: journalnode is running as process 3588.  Stop it first.
slave2: journalnode is running as process 2618.  Stop it first.
master: journalnode is running as process 6995.  Stop it first.
Starting ZK Failover Controllers on NN hosts [master slave1]

Strange, right? The output clearly claims the NameNode is being started on slave1, but jps on that machine shows no such process:

[root@slave1 hadoop]# jps
1251 QuorumPeerMain
3588 JournalNode
3754 DataNode
3933 Jps

Checking the NameNode log on slave1 (the shutdown message below confirms which host it was) turned up this error:

2021-06-16 20:07:36,719 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:250)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1132)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:747)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:652)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:966)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:939)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1772)
2021-06-16 20:07:36,722 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: NameNode is not formatted.
2021-06-16 20:07:36,768 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at slave1/192.168.220.8
************************************************************/
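If you're not sure where that log lives, the NameNode writes under $HADOOP_HOME/logs by default. The exact file name below is an assumption; it depends on the user you start Hadoop as, the hostname, and HADOOP_LOG_DIR:

```shell
# Log file name pattern: hadoop-<user>-namenode-<hostname>.log
# (example path; check your own HADOOP_LOG_DIR)
tail -n 50 $HADOOP_HOME/logs/hadoop-root-namenode-slave1.log
```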

So the node had crashed at some point, and now it complains that the NameNode is not formatted. Fine, I formatted again. Still nothing. I re-formatted several times in a row. (You can imagine my pain at this point.)
Well, we're all civilized people here; when there's a problem, we solve it.
Enough venting; everything above was just the backstory. The key to the fix: after formatting on master, run the following command on slave1 (that is, on whichever machine should be running a NameNode but isn't):

hdfs namenode -bootstrapStandby    # sync metadata from the formatted NameNode

Once the sync succeeds, run one more command:

hdfs --daemon start namenode

This starts the NameNode process on its own. Run jps again and you'll finally see the long-awaited NameNode (I've never missed it so much).
Notes on re-formatting:
1. Before formatting, delete the files generated by any previous format on every node. Usually this is a `current` directory, though other files may be generated as well; check the paths set in your configuration files.
2. If you run a high-availability cluster, ZooKeeper and the JournalNodes must be started before formatting, and they must be started on every machine:

zkServer.sh start 
hdfs --daemon start journalnode 
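Putting the notes above together, the whole recovery procedure looks roughly like this. This is a sketch, not a drop-in script: the data paths are assumptions and must match the dfs.namenode.name.dir / dfs.datanode.data.dir / JournalNode directories in your own hdfs-site.xml, and each step must be run on the hosts indicated:

```shell
# Hypothetical example paths -- replace with the directories from your hdfs-site.xml.

# 0. On master: stop HDFS first
stop-dfs.sh

# 1. On EVERY node: remove the metadata left over from the previous format
rm -rf /opt/data/hadoop/name/current /opt/data/hadoop/data/current
rm -rf /opt/data/hadoop/journaldata/*

# 2. On EVERY node: bring up ZooKeeper and the JournalNode before formatting
zkServer.sh start
hdfs --daemon start journalnode

# 3. On master only: format and start the first NameNode
hdfs namenode -format
hdfs --daemon start namenode

# 4. On slave1 only: copy master's fresh metadata, then start the standby
hdfs namenode -bootstrapStandby
hdfs --daemon start namenode

# 5. On master (HA only): reset the ZKFC state in ZooKeeper, then start the rest
hdfs zkfc -formatZK
start-dfs.sh
```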

My configuration files, for reference:
hbase-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
  <!--
    The following properties are set for running HBase as a single process on a
    developer workstation. With this configuration, HBase is running in
    "stand-alone" mode and without a distributed file system. In this mode, and
    without further configuration, HBase and ZooKeeper data are stored on the
    local filesystem, in a path under the value configured for `hbase.tmp.dir`.
    This value is overridden from its default value of `/tmp` because many
    systems clean `/tmp` on a regular basis. Instead, it points to a path within
    this HBase installation directory.

    Running against the `LocalFileSystem`, as opposed to a distributed
    filesystem, runs the risk of data integrity issues and data loss. Normally
    HBase will refuse to run in such an environment. Setting
    `hbase.unsafe.stream.capability.enforce` to `false` overrides this behavior,
    permitting operation. This configuration is for the developer workstation
    only and __should not be used in production!__

    See also https://hbase.apache.org/book.html#standalone_dist
  -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/opt/data/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <!-- where HBase stores its data in HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <!-- ZooKeeper quorum; separate multiple addresses with commas -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>master:16010</value>
  </property>
</configuration>

hadoop-env.sh

# Set JAVA_HOME below to your JDK path
# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
export JAVA_HOME=/opt/app/jdk

# Append the following at the end of the file:
HDFS_NAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
YARN_RESOURCEMANAGER_USER=root
YARN_NODEMANAGER_USER=root 
HDFS_ZKFC_USER=root

yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
		<description>default value is 1024</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>slave1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>master:2181,slave1:2181,slave2:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

</configuration>
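Once HDFS and YARN are back up, it's worth confirming that one node of each HA pair is active and the other standby. The service IDs below are assumptions: nn1/nn2 must match the dfs.ha.namenodes.* IDs in your hdfs-site.xml, while rm1/rm2 match the yarn.resourcemanager.ha.rm-ids value above:

```shell
# Check which NameNode is active and which is standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Same for the two ResourceManagers configured in yarn-site.xml
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```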