The previous post set up a Hadoop HA pseudo-distributed platform on YARN; the scripts at the end of this post build on that setup.
-
node1 runs MySQL
-
node3 runs the Hive metastore server
-
node4 runs the Hive client
-
Install and configure MySQL (node1)
yum clean all
yum makecache
yum install -y mysql-server
Start mysqld and enable it at boot:
service mysqld start
chkconfig mysqld on
Log in with the mysql client and set up access:
use mysql;
delete from user;
# wipe the default accounts (note: this removes every existing row in the user table)
grant all privileges on *.* to 'root'@'%' identified by '123' with grant option;
# grant all privileges, on every table of every database, to user root connecting from any host, with password 123
flush privileges;
# reload the privilege tables so the grant takes effect
-
scp the Hive tarball to node3 and node4, and the MySQL JDBC driver jar to node3.
-
Unpack and set environment variables (node3)
Unpack:
tar -zxvf apache-hive-1.2.1-bin.tar.gz
mv apache-hive-1.2.1-bin /opt/home/
Append to /etc/profile:
export HIVE_HOME=/opt/home/apache-hive-1.2.1-bin
export PATH=$PATH:$HIVE_HOME/bin
Reload the profile:
source /etc/profile
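A quick way to confirm the profile edit took effect (a sketch, not a required step; the paths are copied from above):

```shell
# Re-apply the exports from /etc/profile and check that the Hive bin
# directory actually landed on PATH.
export HIVE_HOME=/opt/home/apache-hive-1.2.1-bin
export PATH=$PATH:$HIVE_HOME/bin

case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "hive bin is on PATH" ;;
  *)                    echo "PATH not updated" ;;
esac
```

If the tarball really is unpacked there, `hive --version` should now resolve as well.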
-
Edit the Hive configuration
Location: /opt/home/apache-hive-1.2.1-bin/conf
cp hive-default.xml.template hive-site.xml
Open hive-site.xml in vi, put the cursor on the first <property> line, and run:
:.,$-1d
# deletes from the current line through the second-to-last line, leaving only the file header, <configuration>, and the closing </configuration>; paste the properties below between them
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value> <!-- warehouse location on HDFS -->
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://node1:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123</value>
</property>
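After the delete-and-paste, the file must stay well-formed XML: the five properties above sit inside the <configuration> element that survives the edit. A minimal hive-site.xml skeleton looks roughly like this (the stylesheet line is what the 1.2.1 template ships with):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- the five <property> blocks shown above go here -->
</configuration>
```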
-
Move the MySQL JDBC driver jar on node3 into apache-hive-1.2.1-bin/lib:
mv mysql-connector-java-5.1.32-bin.jar /opt/home/apache-hive-1.2.1-bin/lib/
-
scp the Hive directory from node3 to node4, and set the environment variables there the same way (omitted):
scp -r apache-hive-1.2.1-bin/ node4:/opt/home/
-
Edit hive-site.xml on node4 so the client connects to the metastore on node3, port 9083:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://node3:9083</value> <!-- connect to the metastore on node3, port 9083 -->
</property>
-
Because node4 runs Hive as a client (in general, any machine that issues requests to the server needs this fix), copy Hive's jline jar into Hadoop's /opt/home/hadoop-2.6.5/share/hadoop/yarn/lib/ directory and delete the older jline jar already there; otherwise the CLI fails to start with a jline version conflict.
cp lib/jline-2.12.jar /opt/home/hadoop-2.6.5/share/hadoop/yarn/lib/
cd /opt/home/hadoop-2.6.5/share/hadoop/yarn/lib/
rm -f jline-0.9.94.jar
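A quick sanity check after the swap (a sketch; count_jline is an ad-hoc helper defined here, not part of Hadoop): the YARN lib directory should end up with exactly one jline jar.

```shell
# Count jline jars in a directory; after the fix there should be exactly
# one (jline-2.12.jar) left in Hadoop's yarn/lib.
count_jline() {
  ls "$1"/jline-*.jar 2>/dev/null | wc -l
}

dir=/opt/home/hadoop-2.6.5/share/hadoop/yarn/lib
if [ "$(count_jline "$dir")" -eq 1 ]; then
  echo "jline looks good"
else
  echo "expected exactly one jline jar in $dir"
fi
```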
-
Start everything
On node3, start the metastore service:
hive --service metastore
On node4, start the CLI:
hive
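On node4 it is worth waiting for the metastore port before launching the CLI; otherwise hive dies with a connection-refused error. A minimal sketch (wait_for is an ad-hoc helper written here, and node3:9083 comes from hive.metastore.uris above):

```shell
# Poll a TCP port using bash's /dev/tcp; give up after N attempts.
wait_for() {  # usage: wait_for <host> <port> <attempts>
  for _ in $(seq "$3"); do
    timeout 1 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null && return 0
    sleep 1
  done
  return 1
}

if wait_for node3 9083 5; then
  hive -e "show databases;"   # simple smoke test against the metastore
else
  echo "metastore not reachable on node3:9083"
fi
```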
-
Updated start script
#!/bin/bash
echo "start all zookeeper.."
for i in {2..4}; do
  ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh start"
done
start-all.sh
for i in {3..4}; do
  ssh node$i "/opt/home/hadoop-2.6.5/sbin/yarn-daemon.sh start resourcemanager"
done
ssh node3 "source /etc/profile;nohup hive --service metastore >>/dev/null 2>&1 &"
# start the Hive metastore server in the background, silencing its output
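The nohup line is dense, so here is the same pattern in isolation (a demo only, with `sleep` standing in for the metastore):

```shell
# nohup            : keep the command alive after the launching shell exits
# >>/dev/null 2>&1 : append stdout to /dev/null, and send stderr there too
# &                : run in the background so the ssh command can return
nohup sleep 1 >>/dev/null 2>&1 &
pid=$!
echo "detached pid: $pid"
wait "$pid"
echo "demo finished"
```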
-
Updated stop script
#!/bin/bash
stop-all.sh
for i in {3..4}; do
  ssh node$i "/opt/home/hadoop-2.6.5/sbin/yarn-daemon.sh stop resourcemanager"
done
echo "stop all zookeeper.."
for i in {2..4}; do
  ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh stop"
done
ssh node3 "source /etc/profile;jps |grep RunJar|awk '{print \$1}'|xargs kill -9"
# over ssh, find the remote metastore by its jps name and kill it
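The kill pipeline works because jps prints one "<pid> <MainClass>" line per JVM, and the metastore runs under the generic launcher class RunJar. With canned jps output (sample_jps is a stand-in defined here):

```shell
# Fake jps output: pid followed by main class name.
sample_jps() {
  printf '%s\n' "1892 NameNode" "2417 RunJar" "2590 Jps"
}

# Same pipeline as in the stop script: pick the RunJar line, keep the
# first field (the pid); xargs then hands it to kill -9.
sample_jps | grep RunJar | awk '{print $1}'
# → 2417
```

Note the caveat: any other JVM launched via RunJar on node3 (a running Hive CLI, for example) would match and be killed too.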