NameNode Migration

Environment & Background

Three Tianyi Cloud (China Telecom Cloud) servers: one NameNode and two DataNodes. Commands on the NameNode kept failing with a "cannot connect to D-Bus" error (no idea which step of the MySQL installation broke it), so I decided to simply reinstall the NameNode machine.

  1. Hadoop 2.10.1
  2. CentOS 7.6
  3. JDK 8u311
  4. No Hive or other components
  5. HDFS already holds some data

Data backup

Check hdfs-site.xml to find where the NameNode data is stored:

<property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop/app/tmp/dfs/name</value>  <!-- directory where the NameNode stores its metadata -->
</property>
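The same value can also be read from the live configuration, which is a handy way to double-check which file Hadoop actually picked up; a minimal check, assuming the hdfs command is on the PATH:

hdfs getconf -confKey dfs.namenode.name.dir   # prints the NameNode metadata directory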

Check the /etc/hosts configuration:

192.168.1.247 hadoop000  # NameNode
192.168.1.175 hadoop001
192.168.1.224 hadoop002

Copy the directories to a local machine

(Screenshot of the backed-up directories: https://raw.githubusercontent.com/SunQuanmeng/imgs/master/202206/202206062200718.png)

  1. data holds the data saved by the NameNode
  2. hadoopfile holds the configuration files from hadoop/etc/hadoop
  3. root and shell hold some files and jar packages used earlier for learning
  4. profile is the /etc/profile file
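A minimal sketch of how these directories could be archived and pulled off the old NameNode before the reinstall; the archive names, the /tmp staging path, and the old install path are assumptions for illustration, not the exact commands I ran:

# on the old NameNode: pack the metadata and the config/profile backups
tar -czf /tmp/name-backup.tar.gz -C /data/hadoop/app/tmp/dfs name
tar -czf /tmp/conf-backup.tar.gz /hadoopfile/hadoop/etc/hadoop /etc/profile
# from a local machine: pull the archives down for safekeeping
scp root@hadoop000:/tmp/*-backup.tar.gz ./namenode-backup/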

Reinstall the cloud server

First stop the Hadoop cluster services on the NameNode, then shut the machine down from the cloud console and reinstall the operating system.

Change the hostname

vi /etc/hostname
# set the hostname to hadoop000
reboot  # reboot so the change takes effect
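On CentOS 7 the hostname can also be changed without editing the file and rebooting; a small alternative:

hostnamectl set-hostname hadoop000   # applied immediately; re-login to see it in the prompt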

Upload the backup files

First upload the data, shell, and root folders.

Then upload the environment files.

Set up passwordless SSH login

ssh-keygen -t rsa  # only the reinstalled server needs a new RSA key pair
ls .ssh/  # under the root home directory
# distribute the public key
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop001
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop002

# test that hadoop000 can log in to hadoop001 and hadoop002 without a password
ssh hadoop001
ssh hadoop002

# log in to hadoop001 and hadoop002 and push their id_rsa.pub to hadoop000
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop000

# test passwordless login back to hadoop000
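One thing to watch out for: because hadoop000 was reinstalled, its SSH host key changed, so hadoop001 and hadoop002 may refuse to connect with a changed-host-key warning. A small cleanup, assuming the default known_hosts location:

# run on hadoop001 and hadoop002 if ssh hadoop000 complains about a changed host key
ssh-keygen -R hadoop000
ssh-keygen -R 192.168.1.247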

Install the JDK

 tar -zxvf jdk-8u311-linux-i586.tar.gz  # in the /hadoopfile directory
 mv jdk1.8.0_311 jdk8

 # configure the environment variables
 vi /etc/profile
 

#java path
JAVA_HOME=/hadoopfile/jdk8
JAVA_BIN=/hadoopfile/jdk8/bin
JRE_HOME=/hadoopfile/jdk8/jre
PATH=$PATH:/hadoopfile/jdk8/bin:/hadoopfile/jdk8/jre/bin
CLASSPATH=/hadoopfile/jdk8/jre/lib:/hadoopfile/jdk8/lib:/hadoopfile/jdk8/jre/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH

export PATH=$PATH:/hadoopfile/mysql/bin/

source /etc/profile
# verify the Java environment
javac -version
java -version 

Install Hadoop

tar -zxvf hadoop-2.10.1.tar.gz
mv hadoop-2.10.1 hadoop

# configure the Java environment used by Hadoop (hadoop-env.sh)
# configure yarn-site.xml
# configure core-site.xml
# configure hdfs-site.xml

# all of the above can simply be copied over from the backed-up hadoop/etc/hadoop directory

# 配置hadoop 环境变量
vim ~/.bash_profile
    # hadoop path

    export HADOOP_HOME=/hadoopfile/hadoop
    export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
    PATH=$PATH:$HOME/bin
    export PATH
[root@hadoop000 hadoopfile]# hadoop version
Hadoop 2.10.1
Subversion https://github.com/apache/hadoop -r 1827467c9a56f133025f28557bfc2c562d78e816
Compiled by centos on 2020-09-14T13:17Z
Compiled with protoc 2.5.0
From source with checksum 3114edef868f1f3824e7d0f68be03650
This command was run using /hadoopfile/hadoop/share/hadoop/common/hadoop-common-2.10.1.jar
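For reference, a sketch of what copying the backed-up configuration into the new installation and pointing Hadoop at the new JDK could look like; the backup location /root/hadoopfile-backup is an assumption, the other paths are the ones used in this post:

# copy the backed-up config files into the new installation (backup path assumed)
cp /root/hadoopfile-backup/core-site.xml /root/hadoopfile-backup/hdfs-site.xml \
   /root/hadoopfile-backup/yarn-site.xml /hadoopfile/hadoop/etc/hadoop/
# make hadoop-env.sh use the new JDK explicitly
echo 'export JAVA_HOME=/hadoopfile/jdk8' >> /hadoopfile/hadoop/etc/hadoop/hadoop-env.sh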

Try starting the cluster

start-all.sh

An error appeared:

Starting secondary namenodes [0.0.0.0]
The authenticity of host ‘0.0.0.0 (0.0.0.0)’ can’t be established.
ECDSA key fingerprint is SHA256:S+B7DspLygG4ILOyXxR13sKg+zHRhy5CT7Ho88PCwJc.
ECDSA key fingerprint is MD5:e5:72:bb:78:a0:ff:b6:15:c0:c0:d7:f9:c7:7f:bd:0e.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added ‘0.0.0.0’ (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /hadoopfile/hadoop/logs/hadoop-root-secondarynamenode-hadoop000.out

I answered yes to see whether it would carry on, and it did.
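This prompt is just the normal first-connection host-key check: start-all.sh launches the secondary NameNode over ssh to 0.0.0.0, an address the freshly reinstalled machine has never connected to before. Answering yes once records the key; alternatively it can be pre-accepted, assuming the default known_hosts file:

ssh-keyscan 0.0.0.0 >> ~/.ssh/known_hosts   # pre-accept the host key so start-all.sh does not stop to ask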

Check by visiting the web page: Namenode information

(Screenshot of the Namenode information web page: https://raw.githubusercontent.com/SunQuanmeng/imgs/master/202206/202206062335909.png)

Check the files on HDFS

[root@hadoop000 hadoopfile]# hdfs dfs -ls /
-rw-r--r--   3 root supergroup    2769741 2022-04-12 13:02 /10000_access.log
drwxr-xr-x   - root supergroup          0 2022-04-12 13:14 /browserout
drwxrwx---   - root supergroup          0 2022-04-03 20:35 /history
drwxr-xr-x   - root supergroup          0 2022-04-01 13:21 /input
drwxr-xr-x   - root supergroup          0 2022-05-28 13:31 /log
drwxr-xr-x   - root supergroup          0 2022-04-03 19:46 /output
drwxr-xr-x   - root supergroup          0 2022-04-12 16:47 /springhdfs
-rw-r--r--   3 root supergroup         50 2022-04-01 13:16 /test
drwx------   - root supergroup          0 2022-04-03 21:12 /tmp
drwxr-xr-x   - root supergroup          0 2022-04-01 16:26 /user
drwxr-xr-x   - root supergroup          0 2022-01-17 18:42 /wordcountdemo

Tested from the hadoop002 node as well; it also appears to work:

[root@hadoop002 hadoop]# hdfs dfs -ls /
-rw-r--r--   3 root supergroup    2769741 2022-04-12 13:02 /10000_access.log
drwxr-xr-x   - root supergroup          0 2022-04-12 13:14 /browserout
drwxrwx---   - root supergroup          0 2022-04-03 20:35 /history
drwxr-xr-x   - root supergroup          0 2022-04-01 13:21 /input
drwxr-xr-x   - root supergroup          0 2022-05-28 13:31 /log
drwxr-xr-x   - root supergroup          0 2022-04-03 19:46 /output
drwxr-xr-x   - root supergroup          0 2022-04-12 16:47 /springhdfs
-rw-r--r--   3 root supergroup         50 2022-04-01 13:16 /test
drwx------   - root supergroup          0 2022-04-03 21:12 /tmp
drwxr-xr-x   - root supergroup          0 2022-04-01 16:26 /user
drwxr-xr-x   - root supergroup          0 2022-01-17 18:42 /wordcountdemo

Start the JobHistory server

cd /hadoopfile/hadoop/sbin
./mr-jobhistory-daemon.sh start historyserver
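A quick way to confirm the history server actually came up, assuming the default web UI address (mapreduce.jobhistory.webapp.address, port 19888) has not been changed:

curl -s -o /dev/null -w "%{http_code}\n" http://hadoop000:19888/jobhistory   # 200 means the JobHistory UI is serving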

Check with jps

[root@hadoop000 sbin]# jps
3313 Jps
2034 DataNode
611 WrapperSimpleApp
2789 NodeManager
3241 JobHistoryServer
1933 NameNode
2543 SecondaryNameNode
2687 ResourceManager

It seems Hadoop cannot be shut down gracefully, because the stop scripts cannot match the PIDs.

Next time I should keep a copy of the files under sbin as well, so that hadoop-daemon.sh and yarn-daemon.sh would not need to be modified again.
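The likely cause is that hadoop-daemon.sh, yarn-daemon.sh and mr-jobhistory-daemon.sh track daemons through PID files (written to /tmp by default, or wherever the edited scripts put them), and those files did not survive the migration. A sketch of pinning the PID directory through the env files instead of patching the scripts, with /hadoopfile/hadoop/pids as an assumed location:

# hadoop-env.sh
export HADOOP_PID_DIR=/hadoopfile/hadoop/pids
# yarn-env.sh
export YARN_PID_DIR=/hadoopfile/hadoop/pids
# mapred-env.sh (used by mr-jobhistory-daemon.sh)
export HADOOP_MAPRED_PID_DIR=/hadoopfile/hadoop/pids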

Summary

So far everything seems to be working. I had been worried that running hdfs namenode -format would cause errors; in the end I never ran that command and the cluster still runs normally. I will keep this setup for now and have saved a server image; if problems come up I will keep updating this post.
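For completeness, this is why skipping hdfs namenode -format is the right call here: formatting would generate a brand-new clusterID in the NameNode's metadata directory, and the DataNodes would refuse to join because the clusterID they recorded earlier would no longer match. Restoring the old dfs.namenode.name.dir keeps the original IDs. A quick sanity check, using the metadata path from hdfs-site.xml above:

cat /data/hadoop/app/tmp/dfs/name/current/VERSION   # the clusterID here must match the one in each DataNode's data directory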
