Download from: http://apache.fayea.com/hive/stable/
Extract: tar -zxvf apache-hive-1.2.0-bin.tar.gz
Configure the environment variables:
1. Edit the file
/usr/local/hive/bin/hive-config.sh
and append the following at the end (run `env` to list all current environment variables):
export JAVA_HOME=/usr/java/jdk1.7.0_71
export HIVE_HOME=/home/hadoop/apache-hive-1.2.0-bin
export HADOOP_HOME=/home/hadoop/hadoop-1.2.1
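The three exports above can also be appended from a script; a minimal sketch (the target path here is a local stand-in so the commands are safe to try — on the real machine it is /usr/local/hive/bin/hive-config.sh):

```shell
# Append the environment exports from this guide to hive-config.sh.
# HIVE_CONFIG is a local stand-in; point it at the real file when installing.
HIVE_CONFIG=./hive-config.sh
cat >> "$HIVE_CONFIG" <<'EOF'
export JAVA_HOME=/usr/java/jdk1.7.0_71
export HIVE_HOME=/home/hadoop/apache-hive-1.2.0-bin
export HADOOP_HOME=/home/hadoop/hadoop-1.2.1
EOF
# Sanity check: the exports are now in the file.
grep '^export' "$HIVE_CONFIG"
```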
2. Copy the configuration templates
Enter the extracted Hive directory, then its conf subdirectory:
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
cp hive-log4j.properties.template hive-log4j.properties
3. Edit hive-env.sh
Set the HADOOP_HOME path in hive-env.sh:
Remove the '#' in front of the export HADOOP_HOME line and point it at the directory where you installed Hadoop (the directory containing Hadoop's conf, lib, bin, etc. folders):
HADOOP_HOME=/home/hadoop/hadoop-1.2.1
The reason Hive needs HADOOP_HOME at install time is essentially the same as the reason Hadoop needs JAVA_HOME: Hadoop runs on Java, and Hive runs on Hadoop.
Likewise remove the '#' and set:
export HIVE_CONF_DIR=/home/hadoop/apache-hive-1.2.0-bin/conf
export HIVE_AUX_JARS_PATH=/home/hadoop/apache-hive-1.2.0-bin/lib
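The uncomment-and-edit steps above can be done with sed; a sketch, assuming the commented lines look as they do in the hive-env.sh.template shipped with Hive 1.2.0 (here applied to a stand-in file rather than the real conf/hive-env.sh — check your own copy of the template):

```shell
# Stand-in hive-env.sh with commented lines as they appear in the
# 1.2.0 template (assumption - verify against your template).
printf '%s\n' \
  '# HADOOP_HOME=${bin}/../../hadoop' \
  '# export HIVE_CONF_DIR=' \
  '# export HIVE_AUX_JARS_PATH=' > hive-env.sh
# Uncomment each line and point it at the paths used in this guide.
sed -i 's|^# HADOOP_HOME=.*|HADOOP_HOME=/home/hadoop/hadoop-1.2.1|' hive-env.sh
sed -i 's|^# export HIVE_CONF_DIR=.*|export HIVE_CONF_DIR=/home/hadoop/apache-hive-1.2.0-bin/conf|' hive-env.sh
sed -i 's|^# export HIVE_AUX_JARS_PATH=.*|export HIVE_AUX_JARS_PATH=/home/hadoop/apache-hive-1.2.0-bin/lib|' hive-env.sh
cat hive-env.sh
```

Note that `sed -i` edits the file in place (GNU sed); keep a backup of the real hive-env.sh before running it there.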
4. Edit hive-site.xml
Create the corresponding directories first (e.g. /home/hadoop/apache-hive-1.2.0-bin/warehouse; the scratch dir in 5) below lives on HDFS),
then change the configuration as follows:
1) <property>
<name>hive.exec.local.scratchdir</name>
<value>/home/hadoop/apache-hive-1.2.0-bin/warehouse</value>
<description>Local scratch space for Hive jobs</description>
</property>
2)
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/home/hadoop/apache-hive-1.2.0-bin/log/operation_logs</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
3)
<property>
<name>hive.downloaded.resources.dir</name>
<value>/home/hadoop/apache-hive-1.2.0-bin/resource/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
4)<property>
<name>hive.metastore.warehouse.dir</name>
<value>/home/hadoop/apache-hive-1.2.0-bin/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
5) <property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
</property>
6) (Note: this sets hive.exec.local.scratchdir a second time; the last definition in hive-site.xml wins, so this tmp path is the one Hive actually uses, not the value from 1).)
<property>
<name>hive.exec.local.scratchdir</name>
<value>/home/hadoop/apache-hive-1.2.0-bin/tmp</value>
<description>Local scratch space for Hive jobs</description>
</property>
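The properties in 1)–4) and 6) all point at local directories, which should exist before Hive starts; a sketch (HIVE_HOME defaults to a local path here so the commands are safe to try — on the real machine it is /home/hadoop/apache-hive-1.2.0-bin):

```shell
# Create the local directories referenced by hive-site.xml above.
HIVE_HOME=${HIVE_HOME:-./apache-hive-1.2.0-bin}
mkdir -p "$HIVE_HOME/warehouse" \
         "$HIVE_HOME/log/operation_logs" \
         "$HIVE_HOME/resource" \
         "$HIVE_HOME/tmp"
# The scratch dir in 5) is on HDFS, not the local disk; on a machine
# with a running cluster it would be created with:
#   hadoop fs -mkdir -p /tmp/hive
#   hadoop fs -chmod 733 /tmp/hive
ls "$HIVE_HOME"
```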
5. Change the log directory
Edit the hive-log4j.properties file:
hive.log.dir=/home/hadoop/apache-hive-1.2.0-bin/log
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
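This change can also be scripted; a sketch against a stand-in hive-log4j.properties (the real file is in $HIVE_HOME/conf, copied from the template in step 2, and the default value shown here is an assumption — check your template):

```shell
# Stand-in file with an assumed default value for hive.log.dir.
printf 'hive.log.dir=${java.io.tmpdir}/${user.name}\n' > hive-log4j.properties
# Point the log dir at the path used in this guide.
sed -i 's|^hive.log.dir=.*|hive.log.dir=/home/hadoop/apache-hive-1.2.0-bin/log|' hive-log4j.properties
grep '^hive.log.dir' hive-log4j.properties
```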
Installation is now complete.
1. Start Hive
Log in as the hadoop user and run ./bin/hive
2. Test Hive
hive> CREATE TABLE t (id INT, name STRING);
hive> SHOW TABLES;
hive> SELECT * FROM t;
hive> DROP TABLE t;