Hive Installation and Configuration

1 Environment

Software versions

  • OS: CentOS 7
  • JDK: 1.8.0_172
  • Hadoop: 2.7.6
  • Hive: 2.2.0
  • MySQL: 5.7.22

2 Hive Installation and Configuration

2.1 Download Hive

Download URL:
http://archive.apache.org/dist/hive/hive-2.2.0/
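If the server has network access, the tarball can also be fetched directly on Linux instead of downloading on Windows and uploading. The file name below matches the 2.2.0 listing in the Apache archive; the wget line is left commented as a sketch to run on the target host:

```shell
# Build the download URL for the Hive 2.2.0 binary tarball
HIVE_VER=2.2.0
TARBALL="apache-hive-${HIVE_VER}-bin.tar.gz"
URL="http://archive.apache.org/dist/hive/hive-${HIVE_VER}/${TARBALL}"
echo "download URL: ${URL}"
# On a host with network access to archive.apache.org:
# wget -P /usr/local/soft "${URL}"
```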

2.2 Upload to the target directory on Linux

D:\Program Files\putty> pscp C:\Users\sgmcumt\Downloads\apache-hive-2.2.0-bin.tar.gz root@hadoopmaster:/usr/local/soft

2.3 Change to the upload directory and extract Hive to the target directory

[root@hadoopmaster ~]# cd /usr/local/soft
[root@hadoopmaster soft]# tar -zxvf apache-hive-2.2.0-bin.tar.gz -C /usr/local/hadoop/

2.4 Rename the Hive directory

The extracted directory name is long, so rename it:

[root@hadoopmaster hadoop]# mv apache-hive-2.2.0-bin hive-2.2.0

2.5 Configure variables

Go to the /usr/local/hadoop/hive-2.2.0/conf directory and edit the hive-env.sh and hive-site.xml files.

Add the HADOOP_HOME and HIVE_CONF_DIR variables to hive-env.sh:

[root@hadoopmaster conf]# cp hive-env.sh.template hive-env.sh
[root@hadoopmaster conf]# vi hive-env.sh
# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.6
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/usr/local/hadoop/hive-2.2.0/conf

Configuring hive-site.xml

Most online tutorials have you copy hive-default.xml.template to hive-site.xml and then edit its contents.

Since that template file is large and we only change a handful of properties, that approach is cumbersome; here we simply create hive-site.xml from scratch.

[root@hadoopmaster conf]# vi hive-site.xml

Insert the following content:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/usr/local/hadoop/hive-2.2.0/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>
  <property>
    <name>hive.repl.rootdir</name>
    <value>/usr/local/hadoop/hive-2.2.0/hive/repl/</value>
    <description>HDFS root dir for all replication dumps.</description>
  </property>
  <property>
    <name>hive.repl.cmrootdir</name>
    <value>/usr/local/hadoop/hive-2.2.0/hive/cmroot/</value>
    <description>Root dir for ChangeManager, used for deleted files.</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/usr/local/hadoop/hive-2.2.0/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/usr/local/hadoop/hive-2.2.0/tmp/${system:user.name}/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/usr/local/hadoop/hive-2.2.0/tmp/${system:user.name}</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in metastore matches with one from Hive jars. Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoopmaster:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>

Note: the last four properties depend on which database you use; here the database is MySQL.
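Since hive-site.xml was typed by hand, it is worth confirming it is well-formed XML before starting Hive; a stray character will make Hive fail at startup with a parse error. A minimal check, assuming the xmllint tool (from the libxml2 package) is installed:

```shell
# Check that hive-site.xml parses as well-formed XML
CONF=/usr/local/hadoop/hive-2.2.0/conf/hive-site.xml
if command -v xmllint >/dev/null 2>&1 && [ -f "$CONF" ]; then
  # --noout prints nothing on success; parse errors are reported with line numbers
  xmllint --noout "$CONF" && echo "hive-site.xml is well-formed"
else
  echo "skipped: xmllint or $CONF not found" >&2
fi
```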

2.6 Configure Hive environment variables

[root@hadoopmaster conf]# vi /etc/profile

# lines to add
export HIVE_HOME=/usr/local/hadoop/hive-2.2.0
export PATH=$PATH:$HIVE_HOME/bin
# make the environment variables take effect immediately
[root@hadoopmaster conf]# . /etc/profile
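A quick way to confirm the profile changes took effect; the paths below match the install location used in this guide:

```shell
# Confirm HIVE_HOME is set and its bin directory is on PATH
export HIVE_HOME=/usr/local/hadoop/hive-2.2.0
export PATH=$PATH:$HIVE_HOME/bin
echo "HIVE_HOME=${HIVE_HOME}"
case ":${PATH}:" in
  *":${HIVE_HOME}/bin:"*) echo "hive is on PATH" ;;
  *) echo "warning: ${HIVE_HOME}/bin is not on PATH" >&2 ;;
esac
# On the server, `which hive` should now resolve to ${HIVE_HOME}/bin/hive
```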

3 Setting up local MySQL mode

3.1 Install MySQL

For detailed installation steps, see: MySQL offline installation (generic Linux version)

3.2 Download mysql-connector-java-5.1.46.jar

Download URL:
https://dev.mysql.com/downloads/connector/j/

After the download completes, extract the archive.

3.3 Upload mysql-connector-java-5.1.46.jar to the $HIVE_HOME/lib directory

D:\Program Files\putty> pscp E:\hadoop\mysql-connector-java-5.1.46\mysql-connector-java-5.1.46.jar root@hadoopmaster:/usr/local/hadoop/hive-2.2.0/lib

Note: if you are unsure how to upload the file, see: Transferring files between Windows and Linux servers with pscp

3.4 Initialize the metastore database

[root@hadoopmaster conf]# schematool -initSchema -dbType mysql
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hive-2.2.0/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://hadoopmaster:3306/hive?createDatabaseIfNotExist=true&useSSL=false
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       root
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Initialization script completed
schemaTool completed
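After schematool completes, the hive database in MySQL holds the metastore tables. A sketch of how to double-check, assuming the root/123456 credentials configured in hive-site.xml; run the commented commands on hadoopmaster where MySQL is installed:

```shell
# Inspect the freshly initialized metastore in MySQL.
# Credentials below are the ones configured in hive-site.xml (assumed unchanged).
MYSQL_USER=root
MYSQL_PASS=123456
METASTORE_DB=hive
# List metastore tables; a 2.1.0 schema includes DBS, TBLS and VERSION, among others:
# mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" -e 'SHOW TABLES;' "$METASTORE_DB"
# The VERSION table records the schema version written by schematool (2.1.0 here):
# mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" -e 'SELECT SCHEMA_VERSION FROM VERSION;' "$METASTORE_DB"
echo "metastore checks target database: $METASTORE_DB"
```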

3.5 Verify the installation

Enter the Hive CLI and create a person table.

[root@hadoopmaster conf]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hive-2.2.0/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/hadoop/hive-2.2.0/lib/hive-common-2.2.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> create table person(id int,name string,age int,address string);
OK
Time taken: 4.786 seconds
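Two further cross-checks, sketched under default assumptions (/user/hive/warehouse is Hive's default hive.metastore.warehouse.dir, and the metastore credentials are those from hive-site.xml); run the commented commands on hadoopmaster:

```shell
# Cross-check the new person table from outside the Hive CLI
WAREHOUSE=/user/hive/warehouse   # Hive's default warehouse directory in HDFS
echo "expected table directory: ${WAREHOUSE}/person"
# The table's directory should now exist in HDFS:
# hdfs dfs -ls "${WAREHOUSE}/person"
# And the metastore records the table in its TBLS table:
# mysql -uroot -p123456 -e 'SELECT TBL_NAME FROM TBLS;' hive
```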