Installing Hive on Pseudo-Distributed Hadoop with a Local MySQL Metastore

These are my personal install notes, not a standard tutorial, so copy from them at your own discretion. :-D

Downloading Hive

Since my Hadoop install is 2.7.3, I went straight for Hive 2.0.1.

Installing Hive

Note: Hive runs on top of Hadoop, and each Hive release can work with several Hadoop versions; in general, Hive supports both newer and older Hadoop releases.

1. Extract the package; the Hive directory then sits under /opt/apache-hive-2.0.1-bin.

[root@hadoop001 opt]# tar -zxvf apache-hive-2.0.1-bin.tar.gz

2. Give ownership of the package to the hadoop user

[root@hadoop001 opt]# chown hadoop:hadoop -R apache-hive-2.0.1-bin/

3. Switch back to the hadoop user and add the Hive environment variables

[hadoop@hadoop001 ~]$ vim ~/.bash_profile

Add the Hive path:

# User specific environment and startup programs

#java

export JAVA_HOME=/usr/java/jdk1.8.0_40/

# hadoop

HADOOP_HOME=/opt/hadoop-2.7.3

HIVE_HOME=/opt/apache-hive-2.0.1-bin

PATH=$PATH:$HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$HIVE_HOME/bin

export PATH

Source the profile to apply it:

[hadoop@hadoop001 ~]$ source ~/.bash_profile
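As a quick sanity check, the variables can be re-exported and the PATH inspected; the snippet below mirrors the profile entries above (note that $JAVA_HOME needs its leading $ in the PATH line):

```shell
# Same paths as in ~/.bash_profile above.
export JAVA_HOME=/usr/java/jdk1.8.0_40/
export HADOOP_HOME=/opt/hadoop-2.7.3
export HIVE_HOME=/opt/apache-hive-2.0.1-bin
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$HIVE_HOME/bin

# The Hive bin directory should now appear as a PATH component:
echo "$PATH" | tr ':' '\n' | grep '^/opt/apache-hive-2.0.1-bin/bin$'
```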

4. Hive metadata

Hive metadata can be stored in three ways:

Derby: Hive's default mode; its drawback is that concurrent Hive sessions are not supported.

Local MySQL: metadata lives on a single node, so the risk of data loss is higher.

Remote MySQL: requires network transfers.

Here we take the second approach and set up a local MySQL metastore.

First, install MySQL:

[root@hadoop001 ~]# yum -y install mysql-server

Once that finishes, enable it at boot:

[root@hadoop001 hadoop]# chkconfig mysqld on

Start MySQL:

[root@hadoop001 hadoop]# service mysqld start

Since this is a fresh install, initialize the root user's password first:

[root@hadoop001 hadoop]# mysqladmin -u root password 'hive'

Then log in as root, entering the password hive:

[root@hadoop001 hadoop]# mysql -uroot -p

Create a hive user with password hive, and create the hive metastore database:

mysql> insert into mysql.user(Host,User,Password) values("localhost","hive",password("hive"));

Query OK, 1 row affected, 3 warnings (0.00 sec)

mysql> create database hive;

Query OK, 1 row affected (0.00 sec)

mysql> grant all on hive.* to hive@'%' identified by 'hive';

Query OK, 0 rows affected (0.00 sec)

mysql> grant all on hive.* to hive@'localhost' identified by 'hive';

Query OK, 0 rows affected (0.00 sec)

mysql> grant all on hive.* to hive@'hadoop001' identified by 'hive';

Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;

Query OK, 0 rows affected (0.00 sec)
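If you prefer a repeatable setup, the same statements can be kept in a file and replayed with mysql -uroot -p. The file path below is just an example; the statements mirror the session above (on MySQL 5.x, GRANT ... IDENTIFIED BY will also create the user, so the manual INSERT into mysql.user is not strictly needed):

```shell
# Hypothetical helper file (name is arbitrary); contents match the session above.
cat > /tmp/create_hive_user.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS hive;
GRANT ALL ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL ON hive.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL ON hive.* TO 'hive'@'hadoop001' IDENTIFIED BY 'hive';
FLUSH PRIVILEGES;
SQL

# Replay it (prompts for the root password):
# mysql -uroot -p < /tmp/create_hive_user.sql

grep -c 'GRANT' /tmp/create_hive_user.sql   # -> 3
```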

5. Edit the Hive configuration file

Create Hive's local scratch directory and hand it all to the hadoop user:

[root@hadoop001 hive]# mkdir -p /tmp/hive/iotmp

[root@hadoop001 hive]# chown hadoop:hadoop -R /tmp/hive/

Then generate hive-site.xml from the shipped template:

[root@hadoop001 hive]# cp /opt/apache-hive-2.0.1-bin/conf/hive-default.xml.template /opt/apache-hive-2.0.1-bin/conf/hive-site.xml

The following properties need to be modified:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop001:3306/hive</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>Password to use against metastore database</description>
</property>

<property>
  <name>hive.hwi.listen.port</name>
  <value>9999</value>
  <description>This is the port the Hive Web Interface will listen on (keep it off 3306, which MySQL already uses)</description>
</property>

<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>Creates necessary schema on a startup if one doesn't exist. Set this to false after creating it once</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/tmp/hive/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/tmp/hive/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <value>/tmp/hive/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
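With datanucleus.schema.autoCreateAll set to true, Hive creates the metastore schema on its first start. The more usual route on Hive 2 is to initialize it once with the bundled schematool and then leave autoCreateAll off; the exact command is echoed below as a string so the flags are visible (run the command itself on the Hive host as the hadoop user):

```shell
# One-time metastore schema init, the standard alternative to autoCreateAll.
HIVE_HOME=${HIVE_HOME:-/opt/apache-hive-2.0.1-bin}
echo "$HIVE_HOME/bin/schematool -dbType mysql -initSchema"
```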

6. Configure the MySQL JDBC driver

Download the MySQL JDBC driver and move the jar into $HIVE_HOME/lib:

[root@hadoop001 lib]# mv /opt/soft/mysql-connector-java-5.1.17.jar /opt/apache-hive-2.0.1-bin/lib/

7. Start Hadoop

start-dfs.sh
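Before the first Hive session it is worth making sure the HDFS directories Hive writes to exist and are group-writable; these are the standard defaults from the Hive getting-started docs, written to a throwaway script here only so it can be syntax-checked (run the four hdfs commands directly once start-dfs.sh has finished):

```shell
# Default warehouse and scratch paths on HDFS; names are Hive's defaults.
cat > /tmp/init_hive_dirs.sh <<'EOF'
hdfs dfs -mkdir -p /tmp
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -chmod g+w /tmp
hdfs dfs -chmod g+w /user/hive/warehouse
EOF

sh -n /tmp/init_hive_dirs.sh && echo "syntax ok"
```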

8. Start Hive and create a test table

[hadoop@hadoop001 conf]$ hive

which: no hbase in (/usr/java/jdk1.8.0_40//bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/opt/hadoop-2.7.3/bin:/opt/hadoop-2.7.3/sbin:JAVA_HOME/bin:/opt/apache-hive-2.0.1-bin/bin)

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/opt/apache-hive-2.0.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-common-2.0.1.jar!/hive-log4j2.properties

Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.

hive> show databases;

OK

default

Time taken: 1.079 seconds, Fetched: 1 row(s)

hive> create table test(x int);

OK

Time taken: 0.56 seconds

hive> show tables;

OK

test

Time taken: 0.075 seconds, Fetched: 1 row(s)

9. Check the new test table's metadata in MySQL

[root@hadoop001 apache-hive-2.0.1-bin]# mysql -u root -p

mysql> use hive;

mysql> show tables;

+---------------------------+

| Tables_in_hive |

+---------------------------+

| BUCKETING_COLS |

| CDS |

| COLUMNS_V2 |

| DATABASE_PARAMS |

| DBS |

| FUNCS |

| FUNC_RU |

| GLOBAL_PRIVS |

| PARTITIONS |

| PARTITION_KEYS |

| PARTITION_KEY_VALS |

| PARTITION_PARAMS |

| PART_COL_STATS |

| ROLES |

| SDS |

| SD_PARAMS |

| SEQUENCE_TABLE |

| SERDES |

| SERDE_PARAMS |

| SKEWED_COL_NAMES |

| SKEWED_COL_VALUE_LOC_MAP |

| SKEWED_STRING_LIST |

| SKEWED_STRING_LIST_VALUES |

| SKEWED_VALUES |

| SORT_COLS |

| TABLE_PARAMS |

| TAB_COL_STATS |

| TBLS |

| TBL_PRIVS |

| VERSION |

+---------------------------+

30 rows in set (0.00 sec)


Looking at the TBLS table, you can see the attribute information for the newly created test table.
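The row for the test table can also be fetched directly; the column names below come from the standard Hive 2 metastore schema, and the query is built as a string so it can be passed to mysql -uroot -p as shown in the comment:

```shell
# Look up just the metastore row for table `test`.
QUERY="SELECT TBL_ID, TBL_NAME, TBL_TYPE, OWNER FROM hive.TBLS WHERE TBL_NAME = 'test';"
echo "$QUERY"
# mysql -uroot -p -e "$QUERY"
```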

At this point, the Hive installation is complete.
