Hive Dual-Node HA Cluster Deployment

This document walks through the deployment of a dual-node high-availability (HA) Hive cluster: system environment initialization, Zookeeper installation and configuration, Hadoop dual-node HA NameNode deployment, HBase dual-node HMaster installation, and Hive dual-node HiveServer2 installation. Each step covers the key tasks of editing configuration files, starting services, and verifying the result, with the goal of making the Hive service highly available and stable.


1.1 System Environment Initialization

Stop the firewall:

service iptables stop

chkconfig iptables off

Disable SELinux:

setenforce 0

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

 

Create the users:

vim yonghu.txt

hbase hdfs hive impala kudu spark wxl zookeeper

 

#!/bin/sh

# Create each account listed (whitespace-separated) in yonghu.txt.

for i in `cat /root/hadoop-cdh/yonghu.txt`;

do

    echo "$i"

    useradd "$i"

done
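Re-running the loop above prints `useradd: user ... already exists` for accounts that are already present. A minimal idempotent variant, assuming the same word list:

```shell
#!/bin/sh
# Create an account only if it does not already exist.
ensure_user() {
    if id "$1" >/dev/null 2>&1; then
        echo "user $1 exists, skipping"
    else
        useradd "$1" && echo "user $1 created"
    fi
}

# for i in `cat /root/hadoop-cdh/yonghu.txt`; do ensure_user "$i"; done
```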

 

Passwordless SSH login (if running as a user other than root, replace the root user name and paths accordingly):

config-ssh-root.sh

#!/bin/sh

 

expect -c "

spawn ssh-keygen -t rsa

expect {

\".ssh/id_rsa): \" {send \"\r\";exp_continue }

\"Enter passphrase (empty for no passphrase): \" {send \"\r\";exp_continue }

\"Enter same passphrase again: \" {send \"\r\" }

}

expect eof

"

expect -c "

    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub zk-master-la01

    expect {

        yes/no { send \"yes\r\"; exp_continue }

        *assword* { send \"pass@word1\r\" }

    }

    expect {

        *assword* { send \"pass@word1\r\" }

    }

expect eof

"

expect -c "

    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub zk-master-la02

    expect {

        yes/no { send \"yes\r\"; exp_continue }

        *assword* { send \"pass@word1\r\" }

    }

    expect {

        *assword* { send \"pass@word1\r\" }

    }

expect eof

"

expect -c "

spawn ssh-copy-id -i /root/.ssh/id_rsa.pub zk-la03

expect {

yes/no { send \"yes\r\"; exp_continue }

*assword* { send \"pass@word1\r\" }

}

expect {

*assword* { send \"pass@word1\r\" }

}

expect eof

"

expect -c "

spawn ssh-copy-id -i /root/.ssh/id_rsa.pub nagios_server

expect {

yes/no { send \"yes\r\"; exp_continue }

*assword* { send \"pass@word1\r\" }

}

expect {

*assword* { send \"pass@word1\r\" }

}

expect eof

"

for slave in $(</tmp/hadoop-slaves)

do

#ecl=${EXPECT_CMD/remoteHost/$slave}

#echo $ecl

expect -c "

spawn ssh-copy-id -i /root/.ssh/id_rsa.pub $slave

expect {

yes/no { send \"yes\r\"; exp_continue }

*assword* { send \"pass@word1\r\" }

}

expect {

*assword* { send \"pass@word1\r\" }

}

expect eof

"

done
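The four near-identical `expect` blocks above differ only in the target host, so they can be driven from a single list. A dry-run sketch that only prints the commands it would wrap in `expect` (host names taken from this section):

```shell
#!/bin/sh
# Print one ssh-copy-id command per target host; feed each line into the
# expect snippet above (or pipe to sh once key-based login works).
gen_copyid_cmds() {
    for host in "$@"; do
        echo "ssh-copy-id -i /root/.ssh/id_rsa.pub $host"
    done
}

# gen_copyid_cmds zk-master-la01 zk-master-la02 zk-la03 nagios_server `cat /tmp/hadoop-slaves`
```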

 

1.2 Zookeeper Installation and Configuration

Install JDK 1.8 and apply the configuration:

tar -zxvf jdk-8u121-linux-x64.gz -C /usr/local/

 

vim /home/hadoop/.bash_profile

 

export JAVA_HOME=/usr/local/jdk1.8.0_121

export PATH=$PATH:$JAVA_HOME/bin

 

source /home/hadoop/.bash_profile
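To confirm the new JDK is actually in effect, you can check the major version. A small sketch; the sample string used in the usage comment is an assumption mirroring typical `java -version` output:

```shell
#!/bin/sh
# Extract the major version ("8") from a `java -version` banner line.
jdk_major() {
    echo "$1" | sed -n 's/.*"1\.\([0-9][0-9]*\)\..*/\1/p'
}

# on a node: jdk_major "$(java -version 2>&1 | head -n 1)"   # expect "8"
```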

On node zk-master-la01:

tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/local/

cd /usr/local/zookeeper-3.4.6/conf

 

Edit the Zookeeper configuration file:

vim zoo.cfg

 

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/opt/zookeeper/data

clientPort=2181

server.1=zk-master-la01:2888:3888

server.2=zk-master-la02:2888:3888

server.3=zk-la03:2888:3888

 

# Create the data directory

mkdir -p /opt/zookeeper/data/

echo "1" > /opt/zookeeper/data/myid
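Each node's myid must match the `server.N` entry naming that host in zoo.cfg; a mismatch is a common reason a node fails to join the quorum. A sketch that derives the id from zoo.cfg (the file path and hostname are parameters):

```shell
#!/bin/sh
# Print N from the "server.N=<host>:2888:3888" line matching the given host.
# $1 = path to zoo.cfg, $2 = hostname
myid_for_host() {
    sed -n "s/^server\.\([0-9][0-9]*\)=$2:.*/\1/p" "$1"
}

# on a node: myid_for_host /usr/local/zookeeper-3.4.6/conf/zoo.cfg "`hostname`" > /opt/zookeeper/data/myid
```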

# scp the installation to each of the other nodes

vim host.txt

zk-master-la01

zk-master-la02

zk-la03

slave-01

slave-02

 

for i in `cat /root/host.txt`;do echo $i;scp -r /usr/local/zookeeper-3.4.6 root@$i:/usr/local/zookeeper-3.4.6 ;done

 

On zk-master-la02:

# note: set myid to 2 on this node

mkdir -p /opt/zookeeper/data/

echo "2" > /opt/zookeeper/data/myid

 

On zk-la03:

# note: set myid to 3 on this node

mkdir -p /opt/zookeeper/data/

echo "3" > /opt/zookeeper/data/myid

 

# On zk-master-la01:

/usr/local/zookeeper-3.4.6/bin/zkServer.sh start

/usr/local/zookeeper-3.4.6/bin/zkServer.sh stop

/usr/local/zookeeper-3.4.6/bin/zkServer.sh status

 

# On zk-master-la02:

/usr/local/zookeeper-3.4.6/bin/zkServer.sh start

 

# On zk-la03:

/usr/local/zookeeper-3.4.6/bin/zkServer.sh start
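With all three nodes started, `zkServer.sh status` should report `Mode: leader` on exactly one of them. A sketch that checks this over the concatenated status output of all nodes (the sample lines in the test mimic zkServer.sh output and are assumptions):

```shell
#!/bin/sh
# Read concatenated `zkServer.sh status` output on stdin and verify that
# exactly one node reports itself as leader.
check_quorum() {
    n=$(grep -c '^Mode: leader')
    if [ "$n" -eq 1 ]; then
        echo "quorum ok"
    else
        echo "expected 1 leader, saw $n"
    fi
}

# for h in zk-master-la01 zk-master-la02 zk-la03; do
#     ssh $h /usr/local/zookeeper-3.4.6/bin/zkServer.sh status
# done | check_quorum
```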

 

1.3 Deploy the Hadoop Dual-Node HA NameNode

Extract Hadoop to /usr/local:

tar -zxvf hadoop-2.6.0-cdh5.7.5.tar.gz -C /usr/local/
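As with JAVA_HOME earlier, exporting the Hadoop paths up front keeps the later commands shorter; a suggested fragment for /home/hadoop/.bash_profile (the variable names are conventional, not taken from the source):

```shell
export HADOOP_HOME=/usr/local/hadoop-2.6.0-cdh5.7.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```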

 

cd /usr/local/hadoop-2.6.0-cdh5.7.5/etc/hadoop

Edit the file as follows:

vim core-site.xml

<?xml version="1.0" encoding="utf-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 

<configuration>

<property>

<na
