Standalone Installation of Hive 0.13 + MySQL 5.6.23

I. Installation Environment
Hardware: virtual machine
OS: CentOS 6.4, 64-bit
IP: 10.51.121.10
Hostname: datanode-4
Installation user: root
Hadoop: Hadoop 2.6 (pseudo-distributed); for the installation see: http://blog.csdn.net/freedomboy319/article/details/43953731

II. Install MySQL
1. Go to http://dev.mysql.com/downloads/repo/yum/
and download mysql-community-release-el6-5.noarch.rpm.
2. Add the MySQL Yum repository by running:

#yum localinstall mysql-community-release-el6-5.noarch.rpm

3. Install the server: # yum install mysql-community-server
4. Start the MySQL service: # service mysqld start

# service mysqld
Usage: /etc/init.d/mysqld {start|stop|status|restart|condrestart|try-restart|reload|force-reload}

5. Check that the service is running:

# service mysqld status
mysqld (pid  55977) is running...

6. Set the root user's password (empty after a fresh install):

#mysql -u root
mysql>use mysql;
mysql>update user set password = password('root') where user = 'root';
mysql>flush privileges;
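An optional sanity check (a sketch; the password column is specific to MySQL 5.6's mysql.user grant table and was renamed in later versions) to confirm the root rows now carry the new hash:

```sql
-- illustrative: every root row should now show the same non-empty hash
SELECT user, host, password FROM mysql.user WHERE user = 'root';
```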

7. Create the hive user:

#mysql -uroot -proot
mysql>create user 'hive' identified by 'hive';
mysql>grant all on *.* TO 'hive'@'%' with grant option;
mysql>flush privileges;
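To confirm the grant took effect, the privileges can be inspected (a standard MySQL statement; the comment shows the expected shape of the output, not a captured transcript):

```sql
-- should list: GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' ... WITH GRANT OPTION
SHOW GRANTS FOR 'hive'@'%';
```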

8. Create the hive_metastore database:

#mysql -uhive -phive
mysql>create database hive_metastore;
mysql>show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive_metastore     |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.48 sec)

III. Install Hive
1. Download Hive 0.13.1 from http://mirror.bit.edu.cn/apache/hive/hive-0.13.1/apache-hive-0.13.1-bin.tar.gz.
2. Unpack it: # tar -zxvf apache-hive-0.13.1-bin.tar.gz
Here it is unpacked into /root/hadoop,
so the Hive installation path is /root/hadoop/apache-hive-0.13.1-bin.
3. Configure environment variables in ~/.bash_profile
1) Add the following to ~/.bash_profile:

export HIVE_HOME=/root/hadoop/apache-hive-0.13.1-bin
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$PATH

2) Apply the changes: # source ~/.bash_profile
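Because a missing colon in the PATH line silently drops a directory, here is a self-contained sketch of how the three bin directories should chain together (the JDK path is an assumption taken from the process listing in step 8 below):

```shell
# illustrative only: rebuild the PATH the same way ~/.bash_profile does
JAVA_HOME=/usr/lib/jdk1.6.0_45                  # assumption: JDK path from the ps output
HADOOP_HOME=/root/hadoop/hadoop-2.6.0
HIVE_HOME=/root/hadoop/apache-hive-0.13.1-bin
PATH="$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$PATH"
# print the first three PATH entries to confirm each bin directory is separate
echo "$PATH" | tr ':' '\n' | head -n 3
```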

4. Configure hive-env.sh
In /root/hadoop/apache-hive-0.13.1-bin/conf, run:

# cp hive-env.sh.template hive-env.sh
# vi hive-env.sh
export HADOOP_HOME=/root/hadoop/hadoop-2.6.0
export HIVE_CONF_DIR=/root/hadoop/apache-hive-0.13.1-bin/conf

5. Configure hive-site.xml
1) Run: # cp hive-default.xml.template hive-site.xml
2) Edit the file: # vi hive-site.xml
Hive defaults to an embedded Derby metastore; point it at MySQL instead by changing the following properties:

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://datanode-4:9083</value>
  <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://10.51.121.10:3306/hive_metastore?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>

6. Download the MySQL JDBC driver and put it in the lib directory under the Hive installation path.
The driver used here is mysql-connector-java-5.1.34-bin.jar, placed in /root/hadoop/apache-hive-0.13.1-bin/lib. Hive needs this driver to connect to MySQL over JDBC.
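A quick way to confirm the driver is in place (a sketch; the path matches this install):

```shell
# list the connector jar in Hive's lib directory; no match means it is missing
ls /root/hadoop/apache-hive-0.13.1-bin/lib/mysql-connector-java-*.jar
```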

7. Start metastore and hiveserver2. The metastore service is what Hive uses to reach the metastore database in MySQL; the hiveserver2 service serves JDBC clients, listening on port 10000 by default.

#hive --service metastore &   
#hive --service hiveserver2 &  
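A quick way to see whether the three services are actually listening (a sketch; it relies on bash's /dev/tcp pseudo-device rather than netstat, and assumes everything runs on the local host):

```shell
# 3306 = MySQL, 9083 = metastore, 10000 = HiveServer2
check_port() {
  # bash's /dev/tcp attempts a TCP connection to 127.0.0.1:$1
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "port $1 open"
  else
    echo "port $1 closed"
  fi
}
for p in 3306 9083 10000; do check_port "$p"; done
```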

8. Verify the installation
Running jps should show two RunJar processes:

# jps
38907 RunJar
39030 RunJar
54679 NameNode
54774 DataNode
55214 NodeManager
55118 ResourceManager
16150 Jps
54965 SecondaryNameNode

Running # ps -ef | grep RunJar shows the following processes:

# ps -ef |grep RunJar
root     16165 21232  0 05:56 pts/2    00:00:00 grep RunJar
root     38907 37136  0 Jan21 pts/1    00:01:45 /usr/lib/jdk1.6.0_45/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -......org.apache.hadoop.util.RunJar /root/hadoop/apache-hive-0.13.1-bin/lib/hive-service-0.13.1.jar org.apache.hadoop.hive.metastore.HiveMetaStore
root     39030 37136  0 Jan21 pts/1    00:01:11 /usr/lib/jdk1.6.0_45/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -......org.apache.hadoop.util.RunJar /root/hadoop/apache-hive-0.13.1-bin/lib/hive-service-0.13.1.jar org.apache.hive.service.server.HiveServer2

9. Hive logs
Hive's logging configuration file is hive-log4j.properties.template, which sets:

hive.log.dir=${java.io.tmpdir}/${user.name}
hive.log.file=hive.log

Since the installation was done as root, the Hive logs are written under /tmp/root/.
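To move the logs somewhere other than the /tmp default, the bundled template can be activated first (a sketch; the hive.log.dir property name comes from the template shown above):

```shell
cd /root/hadoop/apache-hive-0.13.1-bin/conf
cp hive-log4j.properties.template hive-log4j.properties
# then edit hive.log.dir in hive-log4j.properties to the desired directory
```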

IV. Running Hive
Create the /tmp and /user/hive/warehouse directories in HDFS:

$ $HADOOP_HOME/bin/hdfs dfs -mkdir -p /tmp
$ $HADOOP_HOME/bin/hdfs dfs -mkdir -p /user/hive/warehouse
$ $HADOOP_HOME/bin/hdfs dfs -chmod g+w /tmp
$ $HADOOP_HOME/bin/hdfs dfs -chmod g+w /user/hive/warehouse

Run # hive to enter the Hive CLI:

# hive

Logging initialized using configuration in jar:file:/root/hadoop/apache-hive-0.13.1-bin/lib/hive-common-0.13.1.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop/apache-hive-0.13.1-bin/lib/hive-jdbc-0.13.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
hive> 
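A minimal smoke test from the CLI (the table name is illustrative; the managed table lands under /user/hive/warehouse created above):

```sql
-- create a managed table, confirm it is visible, then clean up
CREATE TABLE smoke_test (id INT, name STRING);
SHOW TABLES;
DROP TABLE smoke_test;
```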

V. Common Errors
1. Logging into MySQL as the hive user fails with:

# mysql -uhive -phive
Warning: Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'hive'@'localhost' (using password: YES)

Solution: delete the anonymous-user rows (empty user column) from the user table, then run flush privileges so the change takes effect:

# mysql -uroot -proot
mysql> use mysql
mysql> select host,user from user;
+------------------+------+
| host             | user |
+------------------+------+
| %                | hive |
| 127.0.0.1        | root |
| ::1              | root |
| hadoop-node1.grc |      |
| hadoop-node1.grc | root |
| localhost        |      |
| localhost        | root |
+------------------+------+
7 rows in set (0.00 sec)
mysql> delete from user where user ='';
Query OK, 2 rows affected (0.00 sec)

mysql> select host,user from user;
+------------------+------+
| host             | user |
+------------------+------+
| %                | hive |
| 127.0.0.1        | root |
| ::1              | root |
| hadoop-node1.grc | root |
| localhost        | root |
+------------------+------+
5 rows in set (0.00 sec)

2. Other common problems
See: http://blog.csdn.net/freedomboy319/article/details/44828337
