1. Download Hive: wget http://archive.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz
2. Extract the archive with tar -zxvf apache-hive-1.2.1-bin.tar.gz, then move the extracted directory to the target path.
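For example, assuming /home/spark/opt as the target path (the path used throughout the rest of this guide):
mv apache-hive-1.2.1-bin /home/spark/opt/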
3. Configure the MySQL metastore database
3.1. Start mysqld, create the MySQL account for Hive, and grant it sufficient privileges
[root@localhost hadoop]# service mysqld start
[root@localhost hadoop]# chkconfig mysqld on    # start mysqld at boot
mysql
mysql> create user 'hive' identified by 'spark';
mysql> grant all privileges on *.* to 'hive'@'%' with grant option;
mysql> flush privileges;
mysql> grant all privileges on *.* to 'hive'@'localhost' with grant option;
mysql> flush privileges;
mysql> grant all privileges on *.* to 'hive'@'mysqlserver' with grant option;
mysql> flush privileges;
mysql> exit;
3.2. Log in as the hive user to verify access, and create the hive database
[root@localhost hadoop]# mysql -h 172.16.107.9 -u hive -p
mysql> create database hive;
mysql> show databases;
mysql> use hive;
mysql> show tables;
4. Configure the Hive environment variables and initialize Hive's working directories on HDFS (so before deploying Hive, make sure Hadoop is fully deployed and its environment is set up)
4.1. vi .bash_profile and add the environment variables:
export HIVE_HOME=/home/spark/opt/apache-hive-1.2.1-bin
export PATH=$HIVE_HOME/bin:$PATH
Run source .bash_profile to apply the changes immediately.
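To confirm the variables took effect (a quick check, not part of the original steps):
echo $HIVE_HOME
which hive    # should print /home/spark/opt/apache-hive-1.2.1-bin/bin/hive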
4.2. Initialize Hive's working directories on HDFS
./hadoop fs -mkdir /tmp
./hadoop fs -mkdir /home/spark/hive-warehouse
./hadoop fs -chmod g+w /tmp
./hadoop fs -chmod g+w /home/spark/hive-warehouse
If any of the directories above do not exist yet, create them first.
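On Hadoop 2.x, hadoop fs -mkdir also accepts -p to create missing parent directories in one step, e.g.:
./hadoop fs -mkdir -p /home/spark/hive-warehouse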
4.3. Create the iotmp directory
Create the directory /home/spark/opt/apache-hive-1.2.1-bin/iotmp
and set its permissions: chmod 733 iotmp
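The Hive 1.2.1 template configuration refers to ${system:java.io.tmpdir} in several places; a common follow-up (an assumption here, not stated in the original) is to point those entries at this iotmp directory once hive-site.xml has been created in step 4.4, e.g. with GNU sed:
sed -i 's#${system:java.io.tmpdir}#/home/spark/opt/apache-hive-1.2.1-bin/iotmp#g' hive-site.xml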
4.4. Configure the Hive configuration files:
In the /home/spark/opt/apache-hive-1.2.1-bin/conf directory:
Rename hive-default.xml.template to hive-site.xml
Rename hive-log4j.properties.template to hive-log4j.properties
Rename hive-exec-log4j.properties.template to hive-exec-log4j.properties
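For example, from the conf directory (cp keeps the templates as backups; the hive-site.xml copy is shown in 4.4.2):
[root@hadoop0 conf]# cp hive-log4j.properties.template hive-log4j.properties
[root@hadoop0 conf]# cp hive-exec-log4j.properties.template hive-exec-log4j.properties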
With the files renamed, edit hive-env.sh and hive-site.xml as follows (for the MySQL metastore):
4.4.1. hive-env.sh
[root@hadoop0 conf]# pwd
/home/spark/opt/apache-hive-1.2.1-bin/conf
[root@hadoop0 conf]# cp hive-env.sh.template hive-env.sh
# HADOOP_HOME=${bin}/../../hadoop
HADOOP_HOME=/home/spark/opt/hadoop-2.6.0
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/home/spark/opt/apache-hive-1.2.1-bin/conf
4.4.2. hive-site.xml
[root@hadoop0 conf]# cp hive-default.xml.template hive-site.xml
<property>
<name>hive.metastore.local</name>
<value>false</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://172.16.107.9:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>spark</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://172.16.107.9:9083</value>
</property>
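Note that step 4.2 created /home/spark/hive-warehouse on HDFS, while Hive's warehouse directory defaults to /user/hive/warehouse. If Hive should actually use the directory from step 4.2, also add the following property (an addition, not part of the original configuration):
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/home/spark/hive-warehouse</value>
</property>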
5. Download the MySQL JDBC driver (mysql-connector-java-5.1.35.tar.gz) from http://dev.mysql.com/downloads/connector/j/, or fetch the jar directly with wget http://mirrors.ibiblio.org/pub/mirrors/maven2/mysql/mysql-connector-java/5.1.6/mysql-connector-java-5.1.6.jar, and put it in Hive's lib directory (/home/spark/opt/apache-hive-1.2.1-bin/lib).
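If you took the tar.gz download, extract it and copy only the jar into lib (the exact jar name depends on the version you downloaded):
tar -zxvf mysql-connector-java-5.1.35.tar.gz
cp mysql-connector-java-5.1.35/mysql-connector-java-5.1.35-bin.jar /home/spark/opt/apache-hive-1.2.1-bin/lib/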
6. Start the services and connect from the client
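Before the first start you can optionally initialize the metastore schema in MySQL with the schematool utility that ships with Hive (this needs the JDBC driver from step 5 in place; an optional step not in the original):
schematool -dbType mysql -initSchema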
Run the following commands:
Start the metastore: hive --service metastore &
Start HiveServer2: hive --service hiveserver2 &  (HiveServer1 was removed in Hive 1.0, so hive --service hiveserver no longer works in 1.2.1)
Make sure Hadoop, MySQL, and the rest are already running before executing these.
cd $HIVE_HOME/bin
./hive
This drops you into the Hive console; run show tables; and if no error appears, the Hive installation works.
Out of the box, Hive keeps its metastore in an embedded database called Derby; because Derby is embedded, it throws errors as soon as two or more users work on it at the same time. That is why this guide configures MySQL as the metastore instead.
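As a quick smoke test that metadata really lands in MySQL (the table name test_tbl is only an illustration):
hive> create table test_tbl (id int);
hive> show tables;
The new table should then show up in the TBLS table of the metastore database:
mysql> select TBL_NAME from hive.TBLS;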