Hadoop Cluster Administration

Cluster Architecture

Host        Roles
hadoop1     NameNode, SecondaryNameNode, ResourceManager
node-0001   DataNode, NodeManager
node-0002   DataNode, NodeManager
node-0003   DataNode, NodeManager
Reinitializing the Cluster

Warning: this procedure destroys all data in the cluster.

1. Stop the cluster: /usr/local/hadoop/sbin/stop-all.sh
2. Delete /var/hadoop/* on every node
3. On hadoop1, reformat the NameNode: /usr/local/hadoop/bin/hdfs namenode -format
4. Start the cluster: /usr/local/hadoop/sbin/start-all.sh

[root@hadoop1 ~]# /usr/local/hadoop/sbin/stop-all.sh
[root@hadoop1 ~]# for i in hadoop1 node-{0001..0003};do
                      ssh ${i} 'rm -rf /var/hadoop/*'
                  done
[root@hadoop1 ~]# /usr/local/hadoop/bin/hdfs namenode -format
[root@hadoop1 ~]# /usr/local/hadoop/sbin/start-all.sh
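The four steps above can be collected into a small wrapper. A hedged sketch, assuming the host names and paths used in this article; the `nodes` helper and the `-force` flag (which skips the interactive confirmation of `namenode -format`) are additions not shown above:

```shell
# reinit-cluster.sh -- hypothetical wrapper around the four steps above.
# Host names and paths follow this article; adjust for your environment.
HADOOP_HOME=/usr/local/hadoop

nodes() { echo "hadoop1 node-0001 node-0002 node-0003"; }

reinit_cluster() {
    "${HADOOP_HOME}/sbin/stop-all.sh"                   # step 1: stop everything
    for i in $(nodes); do
        ssh "${i}" 'rm -rf /var/hadoop/*'               # step 2: wipe every node
    done
    "${HADOOP_HOME}/bin/hdfs" namenode -format -force   # step 3: reformat (destroys metadata)
    "${HADOOP_HOME}/sbin/start-all.sh"                  # step 4: bring the cluster back up
}
```

Keeping the destructive steps behind a function call at least forces a deliberate invocation rather than a pasted one-liner.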
Adding a New Node

Provision a Cloud Host

Host      IP address     Minimum spec
newnode   192.168.1.54   2 CPU cores, 2 GB RAM
Installing the New Node

Run on hadoop1:

[root@hadoop1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.1.54
[root@hadoop1 ~]# vim /etc/hosts
192.168.1.50    hadoop1
192.168.1.51    node-0001
192.168.1.52    node-0002
192.168.1.53    node-0003
192.168.1.54    newnode
[root@hadoop1 ~]# for i in node-{0001..0003} newnode;do
                      rsync -av /etc/hosts ${i}:/etc/
                  done
[root@hadoop1 ~]# rsync -aXSH /usr/local/hadoop newnode:/usr/local/
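Before starting any daemons, it can be worth confirming that every node actually resolves the new hostname after the /etc/hosts sync. A hedged sketch; `resolves` and `check_all` are hypothetical helpers, and the host list follows this article:

```shell
# Hypothetical resolution check; hostnames follow this article.
resolves() { getent hosts "$1" >/dev/null; }   # true if the name resolves

check_all() {
    for i in hadoop1 node-0001 node-0002 node-0003 newnode; do
        ssh "${i}" "getent hosts newnode >/dev/null" \
            || echo "newnode does not resolve on ${i}"
    done
}
# check_all   # run on hadoop1 once /etc/hosts has been pushed out
```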

Run on newnode:

[root@newnode ~]# yum install -y java-1.8.0-openjdk-devel
[root@newnode ~]# /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[root@newnode ~]# /usr/local/hadoop/bin/hdfs dfsadmin -setBalancerBandwidth  500000000
[root@newnode ~]# /usr/local/hadoop/sbin/start-balancer.sh
[root@newnode ~]# /usr/local/hadoop/sbin/yarn-daemon.sh start nodemanager
[root@newnode ~]# jps
1186 DataNode
1431 NodeManager
1535 Jps
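One detail worth noting in the sequence above: `-setBalancerBandwidth` takes a value in bytes per second, so 500000000 is roughly 476 MiB/s per DataNode. A quick sanity check of the arithmetic:

```shell
# -setBalancerBandwidth is specified in bytes per second, not bits or MB.
bw=500000000
echo "$((bw / 1024 / 1024)) MiB/s"
```

Set this well below your NICs' capacity on a busy cluster, or the balancer traffic will compete with client I/O.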

Verify the cluster (run on hadoop1):

[root@hadoop1 ~]# /usr/local/hadoop/bin/hdfs dfsadmin -report
... ...
-------------------------------------------------
Live datanodes (4):
[root@hadoop1 ~]# /usr/local/hadoop/bin/yarn node -list
... ...
Total Nodes:4
Removing a Node

Configure data migration in hdfs-site.xml (on hadoop1; this file does not need to be synced to the other nodes):

[root@hadoop1 ~]# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/usr/local/hadoop/etc/hadoop/exclude</value>
    </property>

Configure the exclude host list and migrate the data (run on hadoop1):

# Add newnode to the exclude file
[root@hadoop1 ~]# echo newnode >/usr/local/hadoop/etc/hadoop/exclude
# Migrate the data off the node
[root@hadoop1 ~]# /usr/local/hadoop/bin/hdfs dfsadmin -refreshNodes
# Check status; the node may only be taken offline once it reports Decommissioned
[root@hadoop1 ~]# /usr/local/hadoop/bin/hdfs dfsadmin -report
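Decommissioning can take a while on a loaded cluster, since every block on the node must be re-replicated first. A hedged sketch of a polling loop; `is_decommissioned` is a hypothetical helper that greps the report output (the `Decommission Status : Decommissioned` wording is what `hdfs dfsadmin -report` prints in the Hadoop releases this article targets):

```shell
# Hypothetical helper: true once the report marks the node Decommissioned.
is_decommissioned() {
    echo "$1" | grep -q 'Decommission Status : Decommissioned'
}

# Poll until it is safe to stop the daemons on newnode:
#   until is_decommissioned "$(/usr/local/hadoop/bin/hdfs dfsadmin -report)"; do
#       sleep 30
#   done
```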

Take the node offline (run on newnode):

[root@newnode ~]# /usr/local/hadoop/sbin/hadoop-daemon.sh stop datanode
[root@newnode ~]# /usr/local/hadoop/sbin/yarn-daemon.sh stop nodemanager
NFS Gateway

NFS gateway architecture: clients mount the gateway host over NFS; the gateway runs an NFS service alongside an HDFS client, and forwards reads and writes to the HDFS cluster (the namenode and datanodes) on the clients' behalf.
Provision a Cloud Host

Host      IP address     Minimum spec
nfsgw     192.168.1.55   1 CPU core, 1 GB RAM
HDFS User Authorization

The nfsuser account must be added on both hadoop1 and nfsgw:

[root@hadoop1 ~]# groupadd -g 800 nfsuser
[root@hadoop1 ~]# useradd  -g 800 -u 800 -r -d /var/hadoop nfsuser
#----------------------------------------------------------------------------------------
[root@nfsgw ~]# groupadd -g 800 nfsuser
[root@nfsgw ~]# useradd  -g 800 -u 800 -r -d /var/hadoop nfsuser
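The gateway proxies requests as nfsuser, so the account's numeric UID and GID must match exactly on the NameNode host and the gateway (here both are 800). A hedged sketch of a quick consistency check; `uid_of` is a hypothetical helper:

```shell
# Hypothetical check that nfsuser has the same numeric IDs on both hosts.
uid_of() { id -u "$1" 2>/dev/null; }

# Run on each of hadoop1 and nfsgw and compare; both should print 800:
#   uid_of nfsuser
```

A mismatch here shows up later as confusing permission errors on the NFS mount, so it is cheap insurance to verify before going further.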
HDFS Cluster Authorization
[root@hadoop1 ~]# vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/hadoop</value>
    </property>
    <property>
        <name>hadoop.proxyuser.nfsuser.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.nfsuser.hosts</name>
        <value>*</value>
    </property>
</configuration>
[root@hadoop1 ~]# /usr/local/hadoop/sbin/stop-all.sh
[root@hadoop1 ~]# for i in node-{0001..0003};do
                      rsync -avXSH /usr/local/hadoop/etc ${i}:/usr/local/hadoop/
                  done
[root@hadoop1 ~]# /usr/local/hadoop/sbin/start-dfs.sh
[root@hadoop1 ~]# jps
5925 NameNode
6122 SecondaryNameNode
6237 Jps
[root@hadoop1 ~]# /usr/local/hadoop/bin/hdfs dfsadmin -report
... ...
-------------------------------------------------
Live datanodes (3):
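The two proxyuser properties added above let nfsuser impersonate any user from any host, which is the simplest setting for a lab. In production you would typically narrow at least the host list; a hedged sketch, assuming the gateway host name used in this article:

```xml
<!-- Hypothetical tightened variant: only the gateway host may proxy. -->
<property>
    <name>hadoop.proxyuser.nfsuser.hosts</name>
    <value>nfsgw</value>
</property>
```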
NFS Gateway Service
[root@nfsgw ~]# yum remove -y rpcbind nfs-utils
[root@nfsgw ~]# vim /etc/hosts
192.168.1.50    hadoop1
192.168.1.51    node-0001
192.168.1.52    node-0002
192.168.1.53    node-0003
192.168.1.55    nfsgw
[root@nfsgw ~]# yum install -y java-1.8.0-openjdk-devel
[root@nfsgw ~]# rsync -aXSH --delete hadoop1:/usr/local/hadoop /usr/local/
[root@nfsgw ~]# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop1:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/usr/local/hadoop/etc/hadoop/exclude</value>
    </property>
    <property>
        <name>nfs.exports.allowed.hosts</name>
        <value>* rw</value>
    </property>
    <property>
        <name>nfs.dump.dir</name>
        <value>/var/nfstmp</value>
    </property>
</configuration>
[root@nfsgw ~]# mkdir /var/nfstmp
[root@nfsgw ~]# chown nfsuser:nfsuser /var/nfstmp
[root@nfsgw ~]# rm -rf /usr/local/hadoop/logs/*
[root@nfsgw ~]# setfacl -m user:nfsuser:rwx /usr/local/hadoop/logs
[root@nfsgw ~]# getfacl /usr/local/hadoop/logs
[root@nfsgw ~]# cd /usr/local/hadoop/
[root@nfsgw hadoop]# ./sbin/hadoop-daemon.sh --script ./bin/hdfs start portmap
[root@nfsgw hadoop]# jps
1376 Portmap
1416 Jps
[root@nfsgw hadoop]# rm -rf /tmp/.hdfs-nfs
[root@nfsgw hadoop]# sudo -u nfsuser ./sbin/hadoop-daemon.sh --script ./bin/hdfs start nfs3
[root@nfsgw hadoop]# sudo -u nfsuser jps
1452 Nfs3
1502 Jps
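The start order above matters: portmap must already be running (as root) before nfs3 starts (as nfsuser), because nfs3 registers itself with it. A hedged sketch of a restart helper that preserves that ordering; the function name is an assumption:

```shell
# Hypothetical restart helper -- stop in reverse order, start in order.
# portmap runs as root; nfs3 runs as nfsuser and needs portmap up first.
restart_gateway() {
    cd /usr/local/hadoop || return 1
    sudo -u nfsuser ./sbin/hadoop-daemon.sh --script ./bin/hdfs stop nfs3
    ./sbin/hadoop-daemon.sh --script ./bin/hdfs stop portmap
    ./sbin/hadoop-daemon.sh --script ./bin/hdfs start portmap
    sudo -u nfsuser ./sbin/hadoop-daemon.sh --script ./bin/hdfs start nfs3
}
```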
Mount Verification
[root@newnode ~]# yum install -y nfs-utils
[root@newnode ~]# showmount -e 192.168.1.55
Export list for 192.168.1.55:
/ *
[root@newnode ~]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,noatime,sync 192.168.1.55:/ /mnt/
[root@newnode ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
192.168.1.55:/  118G   15G  104G  13% /mnt