Notes on Setting Up Hive 2.1.1 on Hadoop 2.6.1 and Ubuntu 16.04

Original post · 2018-04-16 10:02:51

Hadoop / Hive / HBase version compatibility

(image: Hadoop/Hive/HBase version compatibility matrix)

Downloading from the Hive official site

(image: Hive release download page)

Use the Hadoop version as the reference when matching Hive and HBase. Since our Hadoop version is 2.6.1, we can pick Hive 1.2.1 or later, and anything from HBase-1.2.x through HBase-2.0.x.

Set up MySQL

  • Install MySQL
mutex@mutex-dl:~$ sudo apt-get install mysql-server mysql-client
  • Start MySQL
mutex@mutex-dl:~$ sudo /etc/init.d/mysql start      (on Ubuntu)
* Starting MySQL database server mysqld [ OK ]

sudo /etc/init.d/mysql restart – restart MySQL
sudo /etc/init.d/mysql stop – stop MySQL
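
To confirm the server is actually up before moving on, you can ask it for its version:

mutex@mutex-dl:~$ mysql -u root -p -e "SELECT VERSION();"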

  • Create a hive user
    (screenshot: creating the hive user in MySQL)
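
The screenshot here did not survive, but the step boils down to creating a MySQL account for Hive. A minimal sketch, assuming the username hive and the password hive (pick your own):

mutex@mutex-dl:~$ mysql -u root -p
mysql> CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
mysql> FLUSH PRIVILEGES;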

Download Hive

mutex@mutex-dl:~$ su - hadoop
Password: 
hadoop@mutex-dl:~$ cd /tmp
hadoop@mutex-dl:/tmp$ wget http://www-us.apache.org/dist/hive/hive-2.1.1/apache-hive-2.1.1-bin.tar.gz

hadoop@mutex-dl:/tmp$ sudo tar xvzf apache-hive-2.1.1-bin.tar.gz -C /usr/local

Note that we switch into the hadoop user environment right at the start; do not switch users again midway, to avoid permission problems during the configuration.

Also, if the link above refuses the connection, download the tarball yourself from the Hive site given earlier and extract it into /usr/local.

Hive environment variables

Open ~/.bashrc and configure it as follows:

export HIVE_HOME=/usr/local/apache-hive-2.1.1-bin
export HIVE_CONF_DIR=/usr/local/apache-hive-2.1.1-bin/conf
export PATH=$HIVE_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.
export CLASSPATH=$CLASSPATH:/usr/local/apache-hive-2.1.1-bin/lib/*:.

Then apply it from the command line:

hadoop@mutex-dl:/tmp$ source ~/.bashrc

Create the Hive warehouse directory

hadoop@mutex-dl:/tmp$ echo $HADOOP_HOME
/usr/local/hadoop

This prints the actual directory HADOOP_HOME is configured to. Next, create the directories for Hive to use later:

hadoop@mutex-dl:~$ hdfs dfs -ls /
drwxr-xr-x   - hduser supergroup          0 2018-1-23 11:17 /hbase
drwx------   - hduser supergroup          0 2018-1-18 16:04 /tmp
drwxr-xr-x   - hduser supergroup          0 2018-1-18 09:13 /user

hadoop@mutex-dl:~$ hdfs dfs -mkdir -p /user/hive/warehouse
hadoop@mutex-dl:~$ hdfs dfs -chmod g+w /tmp
hadoop@mutex-dl:~$ hdfs dfs -chmod g+w /user/hive/warehouse

hadoop@mutex-dl:~$ hdfs dfs -ls /
drwxr-xr-x   - hduser supergroup          0 2018-1-23 11:17 /hbase
drwx-w----   - hduser supergroup          0 2018-1-18 16:04 /tmp
drwxr-xr-x   - hduser supergroup          0 2018-1-23 17:18 /user

hadoop@mutex-dl:~$ hdfs dfs -ls /user
drwxr-xr-x   - hduser supergroup          0 2018-1-18 23:17 /user/hduser
drwxr-xr-x   - hduser supergroup          0 2018-1-23 17:18 /user/hive

Configure Hive

hadoop@mutex-dl:~$ cd $HIVE_HOME/conf
hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/conf$ sudo cp hive-env.sh.template hive-env.sh

Edit hive-env.sh and add this line:

export HADOOP_HOME=/usr/local/hadoop

Download Apache Derby

hadoop@mutex-dl:~$ cd /tmp

hadoop@mutex-dl:/tmp$ wget http://archive.apache.org/dist/db/derby/db-derby-10.13.1.1/db-derby-10.13.1.1-bin.tar.gz

hadoop@mutex-dl:/tmp$ sudo tar xvzf db-derby-10.13.1.1-bin.tar.gz -C /usr/local

If wget is refused here too, just download the tarball manually in a browser.

Continue the Derby setup: open ~/.bashrc and append the following:

export DERBY_HOME=/usr/local/db-derby-10.13.1.1-bin
export PATH=$PATH:$DERBY_HOME/bin
export CLASSPATH=$CLASSPATH:$DERBY_HOME/lib/derby.jar:$DERBY_HOME/lib/derbytools.jar

Don't forget to run source ~/.bashrc afterwards so the changes take effect immediately.

Then create a data directory:

hadoop@mutex-dl:/tmp$ sudo mkdir $DERBY_HOME/data

Configure the Hive Metastore

hadoop@mutex-dl:/tmp$ cd $HIVE_HOME/conf

hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/conf$ sudo cp hive-default.xml.template hive-site.xml

Edit hive-site.xml and make sure it contains the following property:

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>

In the same directory, /usr/local/apache-hive-2.1.1-bin/conf, create a new file jpox.properties with the following content:

javax.jdo.PersistenceManagerFactoryClass = org.jpox.PersistenceManagerFactoryImpl
org.jpox.validateTables = false
org.jpox.validateColumns = false
org.jpox.validateConstraints = false
org.jpox.storeManagerType = rdbms
org.jpox.autoCreateSchema = true
org.jpox.autoStartMechanismMode = checked
org.jpox.transactionIsolation = read_committed
javax.jdo.option.DetachAllOnCommit = true
javax.jdo.option.NontransactionalRead = true
javax.jdo.option.ConnectionDriverName = org.apache.derby.jdbc.ClientDriver
javax.jdo.option.ConnectionURL = jdbc:derby://hadoop1:1527/metastore_db;create=true
javax.jdo.option.ConnectionUserName = APP
javax.jdo.option.ConnectionPassword = mine
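
Note that this jpox.properties points at Derby's ClientDriver on port 1527, which assumes the Derby Network Server is running (the hostname hadoop1 is a placeholder; substitute your own). A minimal sketch of starting the server with the DERBY_HOME set earlier:

hadoop@mutex-dl:~$ nohup $DERBY_HOME/bin/startNetworkServer -h 0.0.0.0 &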

Then change the owner of the entire apache-hive-2.1.1-bin directory:

hadoop@mutex-dl:/usr/local$ sudo chown -R hadoop:hadoop apache-hive-2.1.1-bin

Initialize the Metastore schema


hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/bin$ schematool -dbType derby -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:    jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :    org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:   APP
Starting metastore schema initialization to 2.1.1
Initialization script hive-schema-2.1.1.derby.sql
Initialization script completed
schemaTool completed
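
To double-check the result, schematool can also report the schema it just created via its -info flag:

hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/bin$ schematool -dbType derby -info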

Verify that Hive installed successfully

hadoop@mutex-dl:~$ echo $HIVE_HOME
/usr/local/apache-hive-2.1.1-bin

hadoop@mutex-dl:~$ $HIVE_HOME/bin/hive
  • Possible problem 1

Exception in thread "main" java.lang.RuntimeException: Couldn't create directory system:java.io.tmpdir/{hive.session.id}_resources

Solution: edit hive-site.xml, comment out the original ${system:java.io.tmpdir} line, and replace it with /user/hive/tmp/ as shown below:

  <property>
    <name>hive.downloaded.resources.dir</name>
    <!--
    <value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
    -->
    <value>/user/hive/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>

Note in particular that /user/hive/tmp/ here is not a path in HDFS but a local directory belonging to the hadoop user. I initially took it for an HDFS directory, which led to a permission error when creating it; the problem is the same as the StackOverflow question below, which has the details.

(screenshot: the related StackOverflow question)
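
Since /user/hive/tmp/ lives on the local filesystem, it has to exist and be writable by the hadoop user. A minimal sketch, reusing the hadoop user and group from the earlier steps:

hadoop@mutex-dl:~$ sudo mkdir -p /user/hive/tmp
hadoop@mutex-dl:~$ sudo chown -R hadoop:hadoop /user/hive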

  • Possible problem 2

java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/%7Bsystem:user.name%7D

Solution: edit hive-site.xml and change the line commented out below to /tmp/mydir. This too is a local directory of the hadoop user, not an HDFS path.

  <property>
    <name>hive.exec.local.scratchdir</name>
    <!--
    <value>${system:java.io.tmpdir}/${system:user.name}</value>
    -->
    <value>/tmp/mydir</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
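
Likewise, make sure the local scratch directory exists (no sudo is needed under /tmp):

hadoop@mutex-dl:~$ mkdir -p /tmp/mydir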

Hive CLI:

With the problems above fixed, we can start Hive:

hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/bin$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/apache-hive-2.1.1-bin/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show tables;
OK
Time taken: 4.603 seconds
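
As a slightly stronger smoke test than show tables, you can create and drop a throwaway table (the name test_hive is made up for illustration):

hive> CREATE TABLE test_hive (id INT, name STRING);
hive> SHOW TABLES;
hive> DROP TABLE test_hive;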


Other errors

If the following error appears during initSchema:

hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/bin$ schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:    jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :    org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:   APP
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Error: Syntax error: Encountered "<EOF>" at line 1, column 64. (state=42X01,code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***

The fix is to delete the metastore_db directories left behind under the various directories, typically:

/usr/local/apache-hive-2.1.1-bin/bin
/usr/local/apache-hive-2.1.1-bin/conf
/usr/local/apache-hive-2.1.1-bin/scripts/metastore/upgrade/mysql

After deleting those directories, run schematool -dbType mysql -initSchema again.
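
A quick way to locate the leftovers under the Hive install, and to remove them once you have checked the list:

hadoop@mutex-dl:~$ find /usr/local/apache-hive-2.1.1-bin -type d -name metastore_db
hadoop@mutex-dl:~$ find /usr/local/apache-hive-2.1.1-bin -type d -name metastore_db -exec rm -rf {} +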

Notes

Reference: Apache Hadoop : Hive 2.1.0 install on Ubuntu 16.04

Copyright notice: this is an original post by the author and may not be reposted without permission. https://blog.csdn.net/Linux1s1s/article/details/79956862
