Installing Hive 2.3.7 on Ubuntu

I. Configure MySQL

  1. Create the metastore database
create database hive;
  2. Create the hive user
create user 'hive'@'%' identified by 'hive';
  3. Grant privileges
grant all on *.* to 'hive'@'%' identified by 'hive';
flush privileges;
  4. Restart MySQL
sudo service mysql restart
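
A quick optional sanity check: log in as the new hive user from the shell, assuming MySQL is local and accepts the 'hive'@'%' account for local connections with the password set above.

# Verify the hive account can connect and see the hive database
mysql -u hive -phive -e "show databases;"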

II. Install Hive

  1. Extract and install
sudo tar -zxvf ./apache-hive-2.3.7-bin.tar.gz -C /opt/bigdata
cd /opt/bigdata/
sudo mv apache-hive-2.3.7-bin hive-2.3.7
sudo chown -R hadoop:hadoop hive-2.3.7
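
Optionally confirm the layout and ownership before moving on (paths as used above):

# The hive launcher should exist and the tree should be owned by hadoop:hadoop
ls -l /opt/bigdata/hive-2.3.7/bin/hive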
  2. Configure environment variables (append to /etc/profile)
export HIVE_HOME=/opt/bigdata/hive-2.3.7
export PATH=${HIVE_HOME}/bin:$PATH

Save and exit, then reload the profile:

source /etc/profile
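
A quick check that the variables took effect (the second command assumes Hadoop is already installed and on the PATH):

echo $HIVE_HOME    # should print /opt/bigdata/hive-2.3.7
hive --version     # should report Hive 2.3.7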
  3. Edit the configuration file hive-env.sh
cd /opt/bigdata/hive-2.3.7/conf
cp hive-env.sh.template hive-env.sh
sudo vim hive-env.sh

Set the following:

# Set HADOOP_HOME to point to a specific hadoop install directory
HADOOP_HOME=/opt/bigdata/hadoop-2.9.2
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/opt/bigdata/hive-2.3.7/conf
# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=/opt/bigdata/hive-2.3.7/lib
  4. Copy the mysql-connector-java-5.1.48.jar JDBC driver into the lib directory of the Hive installation, as sketched below.
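
A minimal sketch, assuming the driver jar was downloaded into the current directory:

# Put the MySQL JDBC driver on Hive's classpath
cp mysql-connector-java-5.1.48.jar /opt/bigdata/hive-2.3.7/lib/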

  5. Configure the MySQL connection for the metastore

Hive reads its overrides from hive-site.xml rather than hive-default.xml (the shipped hive-default.xml.template is only a reference), so create hive-site.xml in the conf directory:

sudo vim hive-site.xml

with the following content:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
  </property>
</configuration>
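
As the description above notes, schema auto-creation is a convenience; an alternative sketch, using the MySQL settings above, is to initialize the metastore schema once with the bundled schematool:

# One-time metastore schema initialization against MySQL
schematool -dbType mysql -initSchema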
  6. Start the services
# Start Hadoop HDFS first
/opt/bigdata/hadoop-2.9.2/sbin/start-dfs.sh
# Start the metastore service in the background
hive --service metastore &
# Start the Hive CLI
hive
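
If the CLI starts cleanly, a quick smoke test (a sketch; exact output depends on your environment):

# Run a statement through the CLI to exercise the metastore
hive -e "show databases;"
# The metastore tables (DBS, TBLS, ...) should now exist in MySQL's hive database
mysql -u hive -phive -e "use hive; show tables;"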
