I. Configure ZooKeeper
1. Initialize the ID on each of the four servers (id = 1, 2, 3, 4)
Choose the IDs according to the actual number of servers:
echo ID > /var/lib/zookeeper/myid
[root@master1 ~]# echo 1 > /var/lib/zookeeper/myid
[root@master2 ~]# echo 2 > /var/lib/zookeeper/myid
[root@slave1 ~]# echo 3 > /var/lib/zookeeper/myid
[root@slave2 ~]# echo 4 > /var/lib/zookeeper/myid
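The four commands above can be sketched as one loop. Note that a four-node ensemble still tolerates only one failure, since a majority quorum of three is required. The sketch below writes into a local demo directory instead of /var/lib/zookeeper, so the logic can be checked without touching the real hosts; on the cluster you would run each echo on its own server (or over ssh, assuming passwordless root SSH).

```shell
# Demo of the myid assignment: each host gets the next integer,
# mirroring the four echo commands above. DEST stands in for
# /var/lib/zookeeper on each host.
DEST=./zk-myid-demo
id=1
for host in master1 master2 slave1 slave2; do
  mkdir -p "$DEST/$host"
  # on the real host: echo $id > /var/lib/zookeeper/myid
  echo "$id" > "$DEST/$host/myid"
  id=$((id + 1))
done
cat "$DEST/slave2/myid"   # prints 4
```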
2. Configure all four ZooKeeper servers
[root@master1 ~]# vim /etc/zookeeper/conf/zoo.cfg
Add:
server.1=master1:2888:3888
server.2=master2:2888:3888
server.3=slave1:2888:3888
server.4=slave2:2888:3888
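For reference, a complete zoo.cfg under these assumptions might look like the following. The server.N lines are from the steps above; tickTime, initLimit, syncLimit, dataDir, and clientPort are typical defaults for this packaging, not values taken from the original, so adjust them to your installation.

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=master1:2888:3888
server.2=master2:2888:3888
server.3=slave1:2888:3888
server.4=slave2:2888:3888
```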
Copy the file to the other servers:
[root@master1 ~]# scp /etc/zookeeper/conf/zoo.cfg root@master2:/etc/zookeeper/conf/zoo.cfg
[root@master1 ~]# scp /etc/zookeeper/conf/zoo.cfg root@slave1:/etc/zookeeper/conf/zoo.cfg
[root@master1 ~]# scp /etc/zookeeper/conf/zoo.cfg root@slave2:/etc/zookeeper/conf/zoo.cfg
3. Start the service
[root@master1 ~]# systemctl start zookeeper-server
[root@master2 ~]# systemctl start zookeeper-server
[root@slave1 ~]# systemctl start zookeeper-server
[root@slave2 ~]# systemctl start zookeeper-server
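Once all four servers are up, you can confirm the ensemble elected a leader with ZooKeeper's srvr four-letter command (this assumes nc is installed). The cluster loop is shown as a comment; the parsing is demonstrated against a captured sample reply.

```shell
# Extract the "Mode:" value (leader/follower) from srvr output on stdin.
zk_mode() {
  grep '^Mode:' | awk '{print $2}'
}

# On the cluster (hostnames as above):
#   for h in master1 master2 slave1 slave2; do echo srvr | nc "$h" 2181 | zk_mode; done
# Expect exactly one "leader" and three "follower" replies.

# Demonstration against a sample srvr reply:
sample='Zookeeper version: 3.4.6
Latency min/avg/max: 0/0/0
Mode: follower
Node count: 4'
printf '%s\n' "$sample" | zk_mode   # prints follower
```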
4. Create the local directory Hive needs
[root@master1 ~]# mkdir metastore
[root@master2 ~]# mkdir metastore
[root@slave1 ~]# mkdir metastore
[root@slave2 ~]# mkdir metastore
II. Configure Hive2
1. Add configuration entries
[root@master1 ~]# vim /etc/hive2/conf/hive-site.xml
Add or modify (before the closing tag):
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://slave1:3306/metastore</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoStartMechanism</name>
<value>SchemaTable</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://slave1:9083</value>
</property>
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>master1,master2,slave1,slave2</value>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>/usr/lib/hive/lib/hive-hwi-.war</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
</configuration>
Copy the file to the other servers:
[root@master1 ~]# scp /etc/hive2/conf/hive-site.xml root@master2:/etc/hive2/conf/hive-site.xml
[root@master1 ~]# scp /etc/hive2/conf/hive-site.xml root@slave1:/etc/hive2/conf/hive-site.xml
[root@master1 ~]# scp /etc/hive2/conf/hive-site.xml root@slave2:/etc/hive2/conf/hive-site.xml
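After the fan-out, it is worth confirming that every node ended up with an identical hive-site.xml; comparing checksums does this. The ssh loop is a sketch assuming root SSH as in the scp commands above; the comparison itself is demonstrated on two local copies.

```shell
# On the cluster: compare each remote copy against master1's.
#   md5sum /etc/hive2/conf/hive-site.xml
#   for h in master2 slave1 slave2; do
#     ssh root@$h md5sum /etc/hive2/conf/hive-site.xml
#   done
# All four checksums should match. Local demonstration of the idea:
echo '<configuration/>' > copy_a.xml
cp copy_a.xml copy_b.xml
[ "$(md5sum < copy_a.xml)" = "$(md5sum < copy_b.xml)" ] && echo identical
```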
III. Prepare HDFS directories
Create the Hive data warehouse:
[root@master1 ~]# su hdfs
bash-4.2$ hadoop fs -mkdir /tmp
bash-4.2$ hadoop fs -chmod 1777 /tmp
bash-4.2$ hadoop fs -mkdir -p /user/hive/warehouse
bash-4.2$ hadoop fs -chown -R hive:hive /user/hive
bash-4.2$ hadoop fs -chmod 1777 /user/hive/warehouse
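Mode 1777 used above is world-writable plus the sticky bit: any user can create files in the shared directory, but only a file's owner can delete it, the same semantics as /tmp. A quick local illustration (stat -c is GNU coreutils; `hadoop fs -chmod 1777` applies the same bits on HDFS):

```shell
# Create a directory and give it /tmp-style permissions.
mkdir -p ./warehouse-demo
chmod 1777 ./warehouse-demo
stat -c '%a' ./warehouse-demo   # prints 1777
```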
IV. Start the MySQL server
1. Start MySQL on slave1
[root@slave1 ~]# systemctl start mysqld
2. Connect to the MySQL database
[root@slave1 ~]# mysql
or:
[root@slave1 ~]# mysql -u root -h localhost
V. Create the Hive database account
mysql> create database metastore;
Query OK, 1 row affected (0.15 sec)
mysql> use metastore
Database changed
mysql> grant all on metastore.* to 'hiveuser'@'%' identified by 'password';
Query OK, 0 rows affected (1.01 sec)
VI. Initialize the metastore
1. Remove any leftover files that may exist
[root@slave1 ~]# rm -rf /var/lib/hive2/*
2. Copy the required mysql-connector-java.jar into /usr/hdp/2.6.3.0-235/hive2/lib
3. Create the metastore schema on slave1
[root@slave1 ~]# cd /usr/hdp/2.6.3.0-235/hive2
[root@slave1 hive2]# ls
bin conf doc jdbc lib man metastore scripts
[root@slave1 hive2]# cd bin
[root@slave1 bin]# ./schematool -dbType mysql -initSchema
Initialization script hive-schema-2.1.2000.mysql.sql
Initialization script completed
schemaTool completed
4. Confirm in the MySQL console that the tables were created
[root@slave1 ~]# mysql
mysql> use metastore;
mysql> show tables;
+---------------------------+
| Tables_in_metastore       |
+---------------------------+
...
57 rows in set (0.00 sec)
VII. Modify core-site.xml
[root@slave1 ~]# vim /etc/hadoop/conf/core-site.xml
Add:
<property>
<name>hadoop.proxyuser.hdfs.groups</name>
<value>hive</value>
</property>
<property>
<name>hadoop.proxyuser.hdfs.hosts</name>
<value>master1,master2,slave1,slave2,127.0.0.1,localhost</value>
</property>
<property>
<name>hive.conf.restricted.list</name>
<value>hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role</value>
</property>
Copy to the other servers:
[root@slave1 ~]# scp /etc/hadoop/conf/core-site.xml root@master1:/etc/hadoop/conf/core-site.xml
[root@slave1 ~]# scp /etc/hadoop/conf/core-site.xml root@master2:/etc/hadoop/conf/core-site.xml
[root@slave1 ~]# scp /etc/hadoop/conf/core-site.xml root@slave2:/etc/hadoop/conf/core-site.xml
After the changes, restart the Hadoop services.
Reference:
Start the services on master1:
[root@master1 ~]# systemctl start hadoop-hdfs-namenode
[root@master1 ~]# systemctl start hadoop-hdfs-datanode
Start the services on master2:
[root@master2 ~]# systemctl start hadoop-hdfs-datanode
[root@master2 ~]# systemctl start hadoop-hdfs-secondarynamenode
Start the services on slave1 and slave2:
[root@slave1 ~]# systemctl start hadoop-hdfs-datanode
[root@slave2 ~]# systemctl start hadoop-hdfs-datanode
Start the ResourceManager on master2:
[root@master2 ~]# systemctl start hadoop-yarn-resourcemanager
The web UI is available at master2:8088
Start the HistoryServer on slave1 and slave2:
[root@slave1 ~]# systemctl start hadoop-mapreduce-historyserver
[root@slave2 ~]# systemctl start hadoop-mapreduce-historyserver
Start the NodeManager on every node that runs a DataNode:
[root@slave2 ~]# systemctl start hadoop-yarn-nodemanager
VIII. Start the HiveServer2 service
1. Start the remote metastore
[root@slave1 ~]# su - hdfs
Last login: Tue Jul 17 14:50:32 CST 2018 on pts/5
-bash-4.2$ /usr/hdp/2.6.3.0-235/hive2/bin/hive --service metastore
Starting Hive Metastore Server
2. Start HiveServer2
-bash-4.2$ /usr/hdp/2.6.3.0-235/hive2/bin/hiveserver2
3. Connect to the server
bash-4.2$ /usr/hdp/2.6.3.0-235/hive2/bin/beeline -u jdbc:hive2://slave1:10000 -n hive
IX. Verify Hive data entry
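A minimal smoke test for this verification step, assuming the beeline connection shown above; the table name smoke_test is illustrative, not from the original. The HiveQL is written to a file so it can be reviewed before running it against the cluster.

```shell
# Write a small HiveQL script; run it against HiveServer2 on slave1 with:
#   beeline -u jdbc:hive2://slave1:10000 -n hive -f smoke_test.hql
# A successful run creates the table, inserts one row, and returns it.
cat > smoke_test.hql <<'EOF'
CREATE TABLE IF NOT EXISTS smoke_test (id INT, name STRING);
INSERT INTO smoke_test VALUES (1, 'hello');
SELECT * FROM smoke_test;
EOF
wc -l < smoke_test.hql   # prints 3
```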