1: Installation plan
hadoop1 192.168.100.171 (hadoop2.2.0 namenode)
hadoop3 192.168.100.173 (hadoop2.2.0 datanode)
hadoop4 192.168.100.174 (hadoop2.2.0 datanode)
hadoop5 192.168.100.175 (hadoop2.2.0 datanode)
hadoop2 192.168.100.172 (hive0.11.0 client)
hadoop9 192.168.100.179 (MySQL server + hive metastore service)
hadoop 2.2.0 install directory: /app/hadoop/hadoop220
hive 0.11.0 install directory: /app/hadoop/hive011
2: Install and configure hadoop9 (MySQL server + hive metastore service)
A: Install and configure MySQL
For the MySQL installation itself, see my earlier post on installing MySQL 5.6.12 on Linux.
[root@hadoop9 hadoop]# mysql -uroot -p
mysql> grant all on *.* to mysql@'%' identified by 'mysql' with grant option;
mysql> create user 'hadoop' identified by 'hadoop';
mysql> grant all on *.* to hadoop@'%' with grant option;
mysql> quit;
[root@hadoop9 hadoop]# mysql -uhadoop -p
mysql> create database hive;
mysql> quit;
B: Install hive 0.11.0
[root@hadoop9 soft]# tar zxf hive-0.11.0.tar.gz
[root@hadoop9 soft]# cp -r hive-0.11.0 /app/hadoop/hive011
[root@hadoop9 soft]# cd /app/hadoop
[root@hadoop9 hadoop]# chown -R hadoop:hadoop hive011
C: Configure environment variables
[root@hadoop9 hadoop]# vi /etc/profile
export HIVE_HOME=/app/hadoop/hive011
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:${HIVE_HOME}/bin:$PATH
[root@hadoop9 hadoop]# source /etc/profile
D: Configure hive
[root@hadoop9 hadoop]# cd hive011/conf
[hadoop@hadoop9 conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@hadoop9 conf]$ cp hive-env.sh.template hive-env.sh
[hadoop@hadoop9 conf]$ vi hive-env.sh
HADOOP_HOME=/app/hadoop/hadoop220
[hadoop@hadoop9 conf]$ vi hive-site.xml
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop9:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hadoop</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
<description>password to use against metastore database</description>
</property>
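A quick sanity check can catch typos in these four settings, such as a stray "=" in the connection URL. The sketch below greps a hive-site-style file for each JDO key; it generates a sample file so it runs anywhere, but on a real node you would point SITE at ${HIVE_HOME}/conf/hive-site.xml instead.

```shell
# Sketch: grep a hive-site-style file for the four metastore JDBC keys.
# A sample file is generated here; on a real node use ${HIVE_HOME}/conf/hive-site.xml.
SITE=$(mktemp)
cat > "$SITE" <<'EOF'
<property><name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop9:3306/hive?createDatabaseIfNotExist=true</value></property>
<property><name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value></property>
<property><name>javax.jdo.option.ConnectionUserName</name><value>hadoop</value></property>
<property><name>javax.jdo.option.ConnectionPassword</name><value>hadoop</value></property>
EOF
missing=0
for key in ConnectionURL ConnectionDriverName ConnectionUserName ConnectionPassword; do
  grep -q "javax.jdo.option.${key}" "$SITE" || { echo "MISSING: ${key}"; missing=$((missing+1)); }
done
echo "missing keys: $missing"
rm -f "$SITE"
```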
E: Install the MySQL JDBC driver
Download the appropriate JDBC driver from the MySQL website and copy the jar into ${HIVE_HOME}/lib:
[hadoop@hadoop9 hive011]$ cp ../mysql-connector-java-5.1.26-bin.jar lib/mysql-connector-java-5.1.26-bin.jar
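A missing (or duplicated) connector jar is a common cause of metastore startup failures, so it is worth counting what actually landed in lib. The sketch below demonstrates the check against a throwaway directory; on a real node set LIBDIR=${HIVE_HOME}/lib.

```shell
# Sketch: count the MySQL connector jars under a lib directory.
# A throwaway directory is used here; set LIBDIR=${HIVE_HOME}/lib on a real node.
LIBDIR=$(mktemp -d)
touch "$LIBDIR/mysql-connector-java-5.1.26-bin.jar"
count=$(ls "$LIBDIR"/mysql-connector-java-*.jar 2>/dev/null | wc -l)
echo "found $count connector jar(s) in $LIBDIR"
rm -rf "$LIBDIR"
```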
3: Set up hadoop2 (hive 0.11.0 client)
A: Configure environment variables
[root@hadoop2 hadoop]# vi /etc/profile
export HIVE_HOME=/app/hadoop/hive011
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:${HIVE_HOME}/bin:$PATH
[root@hadoop2 hadoop]# source /etc/profile
B: Copy the hive 0.11.0 installation from hadoop9
[hadoop@hadoop9 hadoop]$ scp -r hive011 hadoop@hadoop2:/app/hadoop/
C: Configure hive-site.xml
[hadoop@hadoop2 conf]$ vi hive-site.xml
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop9:9083</value>
<description>Thrift uri for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
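Before starting the client it can help to confirm the metastore port is actually reachable. The sketch below splits the hive.metastore.uris value configured above into host and port with plain POSIX shell, so a startup script could probe the port first.

```shell
# Sketch: split the thrift metastore URI into host and port for a reachability probe.
URI="thrift://hadoop9:9083"
hostport=${URI#thrift://}
host=${hostport%%:*}
port=${hostport##*:}
echo "metastore host=$host port=$port"
```

On hadoop2 this could be followed by something like `nc -z "$host" "$port"` to confirm the metastore service on hadoop9 is listening before running bin/hive.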
4: Start the system and test
A: Start hadoop on hadoop1
[hadoop@hadoop1 hadoop220]$ sbin/start-all.sh
B: Start hive on hadoop9 to test the local MySQL connection
[hadoop@hadoop9 hive011]$ bin/hive
C: Start the metastore on hadoop9 and test the remote connection from hadoop2
[hadoop@hadoop9 hive011]$ bin/hive --service metastore
[hadoop@hadoop2 hive011]$ bin/hive
5: Web-based testing (HWI)
[hadoop@hadoop9 hive011]$ cp /usr/java/jdk1.7.0_21/lib/tools.jar lib/tools.jar
[hadoop@hadoop9 hive011]$ bin/hive --service hwi
From a client, open the web UI at http://hadoop9:9999/hwi
6: Tips
A: Use hive 0.11.0 for this install. The (then) latest hive 0.12.0 ships configuration template files that are themselves broken, and it has problems connecting to both hadoop 1.2.0 and hadoop 2.2.0.
B: When a hive client connects to a remote metastore, only the hive.metastore.uris parameter needs to be set; the client does not need any of the MySQL connection parameters. This parameter replaces the hive.metastore.local parameter of older releases.
C: To see more detail while hive runs, start it in debug mode:
[hadoop@hadoop9 hive011]$ bin/hive -hiveconf hive.root.logger=DEBUG,console
D: The metastore service can be started in the background so it does not tie up a terminal:
[hadoop@hadoop9 hive011]$ nohup bin/hive --service metastore > metastore.log 2>&1 &
E: The hwi service can likewise be started in the background:
[hadoop@hadoop9 hive011]$ nohup bin/hive --service hwi > hwi.log 2>&1 &
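Both background launches above follow the same nohup pattern. A small helper that also records a PID file makes either service easy to stop cleanly later; this is a hypothetical wrapper, and "sleep 30" stands in below for "bin/hive --service metastore" so the sketch runs anywhere.

```shell
# Hypothetical helper: run a command in the background, logging to <name>.log
# and recording its PID in <name>.pid so it can be stopped cleanly.
start_bg() {
  name=$1; shift
  nohup "$@" > "${name}.log" 2>&1 &
  echo $! > "${name}.pid"
}
stop_bg() {
  kill "$(cat "$1.pid")" && rm -f "$1.pid"
}
start_bg demo sleep 30        # in real use: start_bg metastore bin/hive --service metastore
echo "started pid $(cat demo.pid)"
stop_bg demo
rm -f demo.log
```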
F: If HWI access fails with an error indicating that tools.jar cannot be found, copy it into ${HIVE_HOME}/lib:
[hadoop@hadoop9 hive011]$ cp /usr/java/jdk1.7.0_21/lib/tools.jar lib/tools.jar
G: If HWI access fails with HTTP ERROR 500 java.lang.IllegalStateException: No Java compiler available, ant is probably not installed. Install ant, then add the ANT_HOME and ANT_LIB environment variables to /etc/profile. Note that ANT_LIB is required.
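For reference, the /etc/profile additions for this tip might look like the fragment below; /usr/share/ant is an assumed install location, so point ANT_HOME at wherever ant actually lives on your system.

```shell
# Assumed ant install location; adjust ANT_HOME to match your system.
export ANT_HOME=/usr/share/ant
export ANT_LIB=$ANT_HOME/lib
export PATH=$ANT_HOME/bin:$PATH
```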
References:
https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin