Hive Installation

Compiled and organized from:

https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-InstallationandConfiguration

http://blog.csdn.net/zouxinfox/article/details/5901391

http://www.mazsoft.com/blog/post/2010/02/01/Setting-up-HadoopHive-to-use-MySQL-as-metastore


1 Hadoop must already be installed and working
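
A quick way to confirm that the Hadoop side is ready (assuming $HADOOP_HOME is set and HDFS is running) is to list the filesystem root:

$HADOOP_HOME/bin/hadoop fs -ls /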


2 Hive installation

(1) Download Hive from the official site; version 1.0 is already available.
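
For example, a release tarball can be downloaded and unpacked under $HADOOP_HOME so that it ends up at $HADOOP_HOME/hive (the mirror URL and the 1.0.0 file name below are assumptions; pick whatever release you need from the official download page):

cd $HADOOP_HOME
wget http://archive.apache.org/dist/hive/hive-1.0.0/apache-hive-1.0.0-bin.tar.gz
tar -xvzf apache-hive-1.0.0-bin.tar.gz
mv apache-hive-1.0.0-bin hive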

(2) Add the environment variables in /etc/profile:

export HIVE_HOME=$HADOOP_HOME/hive

export PATH=$HIVE_HOME/bin:$PATH
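
After editing /etc/profile, reload it in the current shell so the new variables take effect:

source /etc/profile
echo $HIVE_HOME    # e.g. /hadoop/hive if HADOOP_HOME=/hadoop, as used later in this post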

(3) Create a hive-env.sh under $HIVE_HOME/conf/.
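
A minimal way to do this, assuming the release ships the usual template file, is to copy the template and point it at the Hadoop installation:

cp $HIVE_HOME/conf/hive-env.sh.template $HIVE_HOME/conf/hive-env.sh
# then, inside hive-env.sh, set for example (path assumed):
# export HADOOP_HOME=/hadoop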

(4) Create the directories in HDFS:

$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp

$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse

$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp

$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse

At this point Hive can already be used from the command line (CLI). However, this mode only supports a single user and is mostly used for testing. In real deployments, Hive's metadata (schema) is usually stored in MySQL instead, so that multiple users can be supported.
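
A quick smoke test of the CLI at this stage (the pokes table is the example table from the Hive GettingStarted wiki cited above):

$HIVE_HOME/bin/hive
hive> CREATE TABLE pokes (foo INT, bar STRING);
hive> SHOW TABLES;
hive> quit;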

3 MySQL installation

In a previous post I showed how to set up Hadoop/Hive to use Derby in server mode as the metastore. Many believe MySQL is a better choice for this purpose, so here I'm going to show how we can configure the cluster we created previously to use a MySQL server as the metastore for Hive.

First we need to install MySQL. In this scenario, I'm going to install MySQL on our Master node, which is named centos1.

While logged in as the root user:

yum install mysql-server

Now make sure the MySQL server is started:

/etc/init.d/mysqld start
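
Optionally, on a SysV-init CentOS system like this one, MySQL can also be set to start on boot:

chkconfig mysqld on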

Next, I'm going to create a new MySQL user for hadoop/hive:

mysql
mysql> CREATE USER 'hadoop'@'centos1' IDENTIFIED BY 'hadoop';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hadoop'@'centos1' WITH GRANT OPTION;
mysql> exit
To make sure this new user can connect to the MySQL server, switch to the hadoop user:
 
su - hadoop
mysql -h centos1 -u hadoop -p
 
We need to change the Hive configuration so it uses MySQL:
 
nano /hadoop/hive/conf/hive-site.xml
 
and the new configuration values are:
 
<property>
  <name>hive.metastore.local</name>
  <value>true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://centos1:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hadoop</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hadoop</value>
</property>
 

Some of the above parameters do not match what we did to set up the Derby server in the previous post, so I decided to delete the jpox.properties file:

rm /hadoop/hive/conf/jpox.properties

Hive needs the MySQL JDBC driver, so we need to download it and copy it into the hive/lib folder:
 
cd /hadoop
wget http://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.11.tar.gz/from/http://mysql.he.net/
tar -xvzf mysql-connector-java-5.1.11.tar.gz
cp mysql-connector-java-5.1.11/*.jar /hadoop/hive/lib
 
To make sure all settings are done correctly, we can do this:
 
cd /hadoop/hive
bin/hive
hive> show tables;
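
If the metastore was created in MySQL as intended, the metadata tables should now also be visible from the MySQL side (the database name hive comes from the ConnectionURL above; expect metastore tables such as DBS and TBLS, names assumed from a typical Hive metastore schema):

mysql -h centos1 -u hadoop -p
mysql> USE hive;
mysql> SHOW TABLES;
mysql> exit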



