Hive 1.1.0-cdh5.16.2 cluster setup in one go (installation package included)

Preface

Last time I deployed the hadoop2.6.0-cdh5.16.2 environment (see: hadoop-2.6.0-cdh5.16.2 cluster, the most detailed one-stop setup, with the hadoop2.6.0-cdh5.16.2 installation package). Now, building directly on that environment, I'll set up a hive1.1.0-cdh5.16.2 cluster. If you can't find the installation package, use the one provided below.

1. Server planning

Service layout

The Hadoop environment is based on the previous post: hadoop-2.6.0-cdh5.16.2 cluster, the most detailed one-stop setup, with the hadoop2.6.0-cdh5.16.2 installation package.

Node                         hiveserver2    hive_metastore    mysql
hadoop-002 (192.168.1.11)    *
hadoop-003 (192.168.1.12)                   *                 *

Deployment path

/home/server

Open ports

hive: 9083, 10000
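
If a firewall is running on these nodes, the two ports can be opened like this (a minimal sketch assuming firewalld; adapt it to whatever firewall you actually use):

sudo firewall-cmd --permanent --add-port=9083/tcp    # hive metastore
sudo firewall-cmd --permanent --add-port=10000/tcp   # hiveserver2
sudo firewall-cmd --reload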

2. Download and extract the installation package on all servers

Installation package:

hive: hive1.1.0-cdh5.16.2 (Baidu Netdisk: https://pan.baidu.com/s/1KabIjXRKF2eecHVfXzqZlA?pwd=alfk)

Extract to:

/home/server/hive-1.1.0-cdh5.16.2
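
A sketch of the extraction step; the tarball name hive-1.1.0-cdh5.16.2.tar.gz is an assumption, use whatever the package from the netdisk is actually called:

# run on every server, from the directory holding the downloaded package
sudo tar -zxvf hive-1.1.0-cdh5.16.2.tar.gz -C /home/server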

3. Deploy MySQL

This time MySQL 8.0.28 is set up the simple way, with Docker.

According to the plan, it is deployed on the hadoop-003 server.

Log in to the hadoop-003 server and create the directories:

sudo mkdir /home/server/mysql
sudo mkdir /home/server/mysql/conf
sudo mkdir /home/server/mysql/data

Create docker-compose.yml in /home/server/mysql:

version: "3.1"
services:
  mysql8:
    image: mysql:8.0.28
    container_name: mysql_hive_metastore
    ports:
      - 3306:3306
    volumes:
      - ./data:/var/lib/mysql
      - ./conf:/etc/mysql/mysql.conf.d
    environment:
      MYSQL_ROOT_PASSWORD: "123456"

cd into /home/server/mysql and start MySQL:

cd /home/server/mysql
sudo docker-compose up -d
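
To confirm the container came up before moving on (the container name mysql_hive_metastore comes from the compose file above):

sudo docker-compose ps
sudo docker logs --tail 20 mysql_hive_metastore   # the log should end with "ready for connections"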

4. Deploy Hive

On the hadoop-002 server, log in with the hadoop account to perform the following operations:

ssh hadoop@localhost

Change the owner of the directory:

sudo chown -R hadoop:develop /home/server/hive-1.1.0-cdh5.16.2

Enter the hive directory:

cd /home/server/hive-1.1.0-cdh5.16.2

Put the MySQL JDBC driver into Hive's lib directory.

Place it in /home/server/hive-1.1.0-cdh5.16.2/lib
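
For example, assuming the driver jar you downloaded is named mysql-connector-java-8.0.28.jar (the exact file name and version are assumptions; any Connector/J jar that provides com.mysql.jdbc.Driver will do):

cp mysql-connector-java-8.0.28.jar /home/server/hive-1.1.0-cdh5.16.2/lib/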

Create conf/hive-site.xml

and write the following configuration into it:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- If MySQL and Hive are on the same server node, change hadoop-003 to localhost -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop-003:3306/hive110cdh_metastore?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
   <!-- Address to connect to for the metastore -->
 <property>
   <name>hive.metastore.uris</name>
   <value>thrift://hadoop-003:9083</value>
 </property>
  <!-- Host that hiveserver2 binds to -->
 <property>
   <name>hive.server2.thrift.bind.host</name>
   <value>hadoop-002</value>
 </property>
 <!-- Port that hiveserver2 listens on -->
 <property>
   <name>hive.server2.thrift.port</name>
   <value>10000</value>
 </property>
<property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
  </property>
<property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>
  <!-- Let Hive automatically choose between local mode and YARN for running jobs -->
  <property>  
    <name>hive.exec.mode.local.auto</name>  
    <value>true</value>  
  </property>
</configuration>

Adjust Hive's JVM heap size

Copy conf/hive-env.sh.template to conf/hive-env.sh

and uncomment the export HADOOP_HEAPSIZE=1024 line inside it.
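
A sketch of these two steps, assuming the template ships the line commented out as "# export HADOOP_HEAPSIZE=1024"; adjust the sed pattern if your template differs:

cp conf/hive-env.sh.template conf/hive-env.sh
sed -i 's/^# export HADOOP_HEAPSIZE=1024/export HADOOP_HEAPSIZE=1024/' conf/hive-env.sh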

Change the log output path

Copy conf/hive-log4j.properties.template to conf/hive-log4j.properties

and change the hive.log.dir parameter in conf/hive-log4j.properties:

hive.log.dir=/home/server/hive-1.1.0-cdh5.16.2/logs

Copy conf/hive-exec-log4j.properties.template to conf/hive-exec-log4j.properties

and change the hive.log.dir parameter in conf/hive-exec-log4j.properties:

hive.log.dir=/home/server/hive-1.1.0-cdh5.16.2/logs
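
The copy-and-edit steps can be scripted like this (a sketch assuming the templates already contain a hive.log.dir line; verify the result before starting Hive):

cp conf/hive-log4j.properties.template conf/hive-log4j.properties
cp conf/hive-exec-log4j.properties.template conf/hive-exec-log4j.properties
sed -i 's|^hive.log.dir=.*|hive.log.dir=/home/server/hive-1.1.0-cdh5.16.2/logs|' conf/hive-log4j.properties conf/hive-exec-log4j.properties
mkdir -p /home/server/hive-1.1.0-cdh5.16.2/logs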

Distribute the changes to hadoop-003:

scp -r conf/ hadoop@hadoop-003:/home/server/hive-1.1.0-cdh5.16.2

Add the Hadoop deployment path to the environment variables

On both hadoop-002 and hadoop-003, add the following environment variable to /etc/bashrc or /etc/profile:

# hadoop
export HADOOP_HOME=/home/server/hadoop-2.6.0-cdh5.16.2

Remember to reload the environment variables with source.
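
For example, if you added the line to /etc/profile:

source /etc/profile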

Start the metastore

Log in to hadoop-003 with the hadoop account and enter the hive directory.

Initialize the metastore:

./bin/schematool -initSchema -dbType mysql -verbose

Start the metastore:

nohup ./bin/hive --service metastore > nohup.out 2>&1 &
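
To confirm the metastore is up (9083 is the port configured in hive.metastore.uris; use netstat if ss is not available):

tail -n 50 nohup.out       # look for startup errors
ss -tlnp | grep 9083       # the metastore should be listening here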

Start hiveserver2

Log in to hadoop-002 with the hadoop account and enter the hive directory.

Start hiveserver2:

nohup ./bin/hive --service hiveserver2 > nohup.out 2>&1 &
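
Likewise, check that hiveserver2 is listening on the port configured in hive.server2.thrift.port (it can take a little while to come up):

tail -n 50 nohup.out
ss -tlnp | grep 10000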

Verification

On the hadoop-002 server, connect to Hive with the beeline tool.

Enter the hive directory:

./bin/beeline -u jdbc:hive2://localhost:10000

Try each of these SQL statements:

show tables;

create table test001(id int, name varchar(100));

insert into test001 values(1, 'zhangsan'),(2, 'lisi');

select * from test001;

select * from (SELECT *, row_number() over(order by `id` DESC) as row_num_ FROM `test001` ORDER BY `id` DESC) t where t.row_num_ between 1 and 10;

If you get an error saying some directory cannot be accessed due to missing permissions,

go into the Hadoop deployment directory and run:

./bin/hadoop dfs -chmod -R 777 <path of the directory that was denied access>
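
For example, if the error message complains about /tmp/hive on HDFS (a common case; /tmp/hive here is just an illustration, use the path reported in your error):

./bin/hadoop dfs -chmod -R 777 /tmp/hive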