5-Node Hadoop, Hive, and HBase HA Cluster Setup


Node Planning

host  processes
node1 namenode, datanode, HRegionServer, NodeManager, DFSZKFailoverController
node2 secondarynamenode, datanode, zookeeper, NodeManager, JournalNode, ResourceManager
node3 namenode, datanode, HRegionServer, NodeManager, DFSZKFailoverController
node4 zookeeper, datanode, HMaster, HRegionServer, NodeManager, JournalNode
node5 zookeeper, datanode, HMaster, HRegionServer, NodeManager, JournalNode, ResourceManager

Basic Environment Setup

Configure the hosts file

192.168.234.100 node1
192.168.234.101 node2
192.168.234.102 node3
192.168.234.103 node4
192.168.234.104 node5
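Since the five addresses are consecutive, the entries above can also be generated instead of typed by hand; a small sketch assuming the same 192.168.234.100-104 range:

```shell
# Generate the five hosts entries for the 192.168.234.100-104 range.
base=192.168.234
entries=$(for i in 1 2 3 4 5; do
    printf '%s.%d node%d\n' "$base" "$((99 + i))" "$i"
done)
printf '%s\n' "$entries"   # append to /etc/hosts on every node
```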

Configure passwordless SSH from node1 and node3 to every node in the cluster

ssh-keygen -t rsa   # press Enter three times to accept the defaults
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
ssh-copy-id node4
ssh-copy-id node5

JDK Installation

# On node1, unpack the JDK to /opt/jdk and configure /etc/profile

export JAVA_HOME=/opt/jdk/jdk1.8.0_191
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin
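The three export lines can be sanity-checked locally before distributing them; this sketch only verifies the variable wiring (the JAVA_HOME path is the one assumed above; no JDK needs to be installed for the check):

```shell
# Reproduce the three profile lines and confirm PATH picks up $JAVA_HOME/bin.
export JAVA_HOME=/opt/jdk/jdk1.8.0_191
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin
case ":$PATH:" in
    *":$JAVA_HOME/bin:"*) echo "PATH includes JAVA_HOME/bin" ;;
    *)                    echo "PATH is missing JAVA_HOME/bin" ;;
esac
```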

# Copy the Java environment from node1 to the remaining machines with scp

scp -r /opt/jdk node2:/opt/jdk
scp -r /opt/jdk node3:/opt/jdk
scp -r /opt/jdk node4:/opt/jdk
scp -r /opt/jdk node5:/opt/jdk

scp -r /etc/profile node2:/etc/profile
scp -r /etc/profile node3:/etc/profile
scp -r /etc/profile node4:/etc/profile
scp -r /etc/profile node5:/etc/profile

ZooKeeper Installation

# On node4, unpack ZooKeeper to /opt/zookeeper/
# Configure zoo.cfg:
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/opt/zookeeper/zookeeper-3.4.13/data
    dataLogDir=/opt/zookeeper/zookeeper-3.4.13/logs
    autopurge.snapRetainCount=500
    autopurge.purgeInterval=24
    clientPort=2181
    server.1=node2:2888:3888
    server.2=node4:2888:3888
    server.3=node5:2888:3888

# Run on node2
echo "1" > /opt/zookeeper/zookeeper-3.4.13/data/myid
# Run on node4
echo "2" > /opt/zookeeper/zookeeper-3.4.13/data/myid
# Run on node5
echo "3" > /opt/zookeeper/zookeeper-3.4.13/data/myid
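Each node's myid must match the N in its server.N line of zoo.cfg, so the value can also be derived instead of hand-edited; a sketch where `me` is hard-coded for illustration (on a real node it would be $(hostname)):

```shell
# Derive this host's myid from the server.N lines configured above.
me=node4   # hypothetical stand-in for $(hostname)
cfg='server.1=node2:2888:3888
server.2=node4:2888:3888
server.3=node5:2888:3888'
id=$(printf '%s\n' "$cfg" | sed -n "s/^server\.\([0-9]*\)=${me}:.*$/\1/p")
echo "$id"   # this value would be written to /opt/zookeeper/zookeeper-3.4.13/data/myid
```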

Hadoop Cluster Installation

On node1, unpack the Hadoop archive to /opt/hadoop/hadoop-2.7.7.

hdfs-site.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
     <property>
  		<name>dfs.nameservices</name>
  		<value>mycluster</value>
    </property>
    <property>
  		<name>dfs.ha.namenodes.mycluster</name>
  		<value>nn1,nn2</value>
	</property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>node1:9000</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>node3:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>node1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>node3:50070</value>
    </property>
    <!-- The three JournalNode hosts and the nameservice name -->
    <property>
       <name>dfs.namenode.shared.edits.dir</name>
       <value>qjournal://node2:8485;node4:8485;node5:8485/mycluster</value>
    </property>
     <!-- Failover proxy provider class for the nameservice mycluster; no need to modify -->
    <property>
  		<name>dfs.client.failover.proxy.provider.mycluster</name>
  		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
     <!-- When a failover occurs, fences the old active node to ensure only one NameNode stays active -->
     <!-- shell(/bin/true) ensures the fencing chain ultimately reports success -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/root/.ssh/id_rsa</value>
    </property>
    <!-- JournalNode edits directory -->
    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/opt/hadoop/hadoop-2.7.7/data/journal/data</value>
    </property>
    <!-- Enable automatic failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
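The shell(/bin/true) entry in dfs.ha.fencing.methods works as a fallback because /bin/true always exits with status 0, which HDFS interprets as successful fencing; this matters when sshfence cannot reach a machine that has died outright:

```shell
# /bin/true succeeds unconditionally, so the fencing chain can never
# block a failover when the old active host is unreachable by sshfence.
/bin/true
echo "fencing fallback exit status: $?"
```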
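With hdfs-site.xml in place, the usual first-time bootstrap order for a QJM-based HA cluster looks roughly like the following. This is a dry-run sketch for Hadoop 2.7.x: every command is echoed rather than executed, and the host assignments follow the planning table above:

```shell
# Dry run of the first-time HA bootstrap sequence; change echo "$@" to "$@"
# to execute for real, running each command on the host noted in the comment.
run() { echo "$@"; }

run ssh node2 hadoop-daemon.sh start journalnode   # JournalNodes must be up
run ssh node4 hadoop-daemon.sh start journalnode   # before the first format
run ssh node5 hadoop-daemon.sh start journalnode
run hdfs namenode -format                          # on node1: format the first NameNode
run hadoop-daemon.sh start namenode                # on node1
run ssh node3 hdfs namenode -bootstrapStandby      # node3 copies node1's metadata
run hdfs zkfc -formatZK                            # create the HA znode in ZooKeeper
run start-dfs.sh                                   # then bring up HDFS as a whole
```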