Installing and Using Hive 2.3.2 with a MySQL Metastore

Installing Hive is fairly simple, and so is basic usage: once the underlying Hadoop cluster is up, you only need to initialize a few directories and a metastore database.

The installation involves two things:

1. Set up a database to store the metadata. The default is the embedded Derby database, which does not allow remote connections, so we switch to MySQL.

2. Configure the Java path and the classpath.

Download: http://mirrors.shuosc.org/apache/hive/hive-2.3.2/

One caveat: mirror URLs change over time, so this one may no longer work; if so, pick a mirror from the official site: http://www.apache.org/dyn/closer.cgi/hive/
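For example, fetching and unpacking from the command line (a sketch; the tarball name is the standard apache-hive-2.3.2-bin.tar.gz, and the install path matches the one used throughout this post):

wget http://mirrors.shuosc.org/apache/hive/hive-2.3.2/apache-hive-2.3.2-bin.tar.gz
tar -zxvf apache-hive-2.3.2-bin.tar.gz -C /home/sri_udap/app/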

After unpacking, first configure the Hive environment variables:

vi /etc/profile

Add:

export HIVE_HOME=/home/sri_udap/app/apache-hive-2.3.2-bin

export PATH=$PATH:$HIVE_HOME/bin

Apply the changes:

source /etc/profile
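To confirm the variables took effect, a quick optional check:

echo $HIVE_HOME
which hive    # should resolve to the bin directory under HIVE_HOME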

In the conf directory, create the configuration files from the templates:

mv hive-default.xml.template hive-site.xml

mv hive-env.sh.template hive-env.sh

Before touching hive-site.xml, modify the other two configuration files first.

In Hadoop's hadoop-env.sh, set the classpath as follows:

export HADOOP_CLASSPATH=.:$CLASSPATH:$HADOOP_CLASSPATH:$HADOOP_HOME/bin

Even with this classpath set, the Hive schema initialization later kept failing with Java class errors; after some digging, I replaced it with a more reliable form:

for f in $HADOOP_HOME/hadoop-*.jar; do
  CLASSPATH=${CLASSPATH}:$f
done

for f in $HADOOP_HOME/lib/*.jar; do
  CLASSPATH=${CLASSPATH}:$f
done

for f in $HIVE_HOME/lib/*.jar; do
  CLASSPATH=${CLASSPATH}:$f
done

Then edit hive-env.sh in the $HIVE_HOME/conf directory (where the template was renamed earlier) and add the following:

export HADOOP_HOME=/home/sri_udap/app/hadoop-2.7.2

export HIVE_CONF_DIR=/home/sri_udap/app/apache-hive-2.3.2-bin/conf

export HIVE_AUX_JARS_PATH=/home/sri_udap/app/apache-hive-2.3.2-bin/lib

Next, edit hive-site.xml and change these properties:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hivetest</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivetest</value>
</property>

Copy a MySQL JDBC driver jar into the lib directory; I used mysql-connector-java-5.1.30.jar.
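The connection settings above assume that MySQL is reachable on the master host and that the hivetest account has rights on the hive database. If that is not already the case, a minimal setup sketch (run as the MySQL root user; createDatabaseIfNotExist=true will create the database on first use, so the CREATE DATABASE line is optional):

mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS hive;
CREATE USER 'hivetest'@'%' IDENTIFIED BY 'hivetest';
GRANT ALL PRIVILEGES ON hive.* TO 'hivetest'@'%';
FLUSH PRIVILEGES;
SQL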

Then manually create on HDFS the base directories that hive-site.xml points to (Hive's warehouse path, scratch directory, and log directory), and grant permissions:

bin/hadoop fs -mkdir -p /user/hive/warehouse

bin/hadoop fs -mkdir -p /user/hive/tmp

bin/hadoop fs -mkdir -p /user/hive/log

bin/hadoop fs -chmod -R 777 /user/hive/warehouse

bin/hadoop fs -chmod -R 777 /user/hive/tmp

bin/hadoop fs -chmod -R 777 /user/hive/log
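For reference, these paths correspond to the following hive-site.xml properties; the values shown are what this walkthrough assumes, so check your own hive-site.xml if you changed them:

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/user/hive/tmp</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/user/hive/log</value>
</property>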

With that done, we can initialize the metastore: start Hadoop first, then run this command from Hive's bin directory:

./schematool -initSchema -dbType mysql

At this point you will likely hit an error:

Exception in thread "main" java.lang.RuntimeException: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir}/${system:user.name}
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:444)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:672)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir}/${system:user.name}
at org.apache.hadoop.fs.Path.initialize(Path.java:148)
at org.apache.hadoop.fs.Path.<init>(Path.java:126)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:487)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:430)
... 7 more
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir}/${system:user.name}
at java.net.URI.checkPath(URI.java:1804)
at java.net.URI.<init>(URI.java:752)
at org.apache.hadoop.fs.Path.initialize(Path.java:145)
... 10 more

This happens because Hive cannot resolve "${system:java.io.tmpdir}". Replace it with a temporary directory you create yourself; mine is /home/sri_udap/app/apache-hive-2.3.2-bin/temp.

Replace every property in hive-site.xml that uses this variable. In fact, ${system:user.name} is not resolved either; if you are being thorough, replace it too by simply dropping the "system:" prefix (leaving ${user.name}). Otherwise you will end up like me, with an odd literal directory being created:

[root@master temp]# ls

9c9855ee-f160-48d4-ab74-9d597c81bb13_resources c1d48876-f1c9-4f97-bc3a-f9743fecc417_resources ${system:user.name}
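A quick way to apply both replacements in one pass (a sketch, using the temp directory path chosen above):

mkdir -p /home/sri_udap/app/apache-hive-2.3.2-bin/temp
cd $HIVE_HOME/conf
sed -i 's|\${system:java.io.tmpdir}|/home/sri_udap/app/apache-hive-2.3.2-bin/temp|g' hive-site.xml
sed -i 's|\${system:user.name}|${user.name}|g' hive-site.xml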

Run the initialization again; this time you should see a set of metastore tables created in MySQL, and the setup is done.
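To double-check from the MySQL side (a sketch; the exact table set varies by Hive version, but you should see metastore tables such as DBS, TBLS, and COLUMNS_V2):

mysql -u hivetest -phivetest -e 'USE hive; SHOW TABLES;'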

Basic usage:

Create a few tables. (After Hive creates a table, a directory with the same name as the table appears on HDFS; loading data then adds files under that directory. In Hive, data is just directories and files.)

Create two tables:

hive> CREATE TABLE t1(id int); -- managed (internal) table t1 with a single int column, id

hive> CREATE TABLE t2(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'; -- managed table t2 with two columns, separated by tabs
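As noted above, each new table appears as a directory under the warehouse path, which you can verify from the Hadoop directory:

bin/hadoop fs -ls /user/hive/warehouse
# expect one directory per table: /user/hive/warehouse/t1 and /user/hive/warehouse/t2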

Then prepare two txt files that match the field delimiters, and load them into the tables:

[root@master temp]# cat t1.txt
1
2
3
4
5
6
7
9

[root@master temp]# cat t2.txt
1	a
2	b
3	c
9	x

Load the data:

hive> LOAD DATA LOCAL INPATH '/t1.txt' INTO TABLE t1; -- load from the local filesystem

hive> LOAD DATA INPATH 't2.txt' INTO TABLE t2; -- load from HDFS
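Note that the second statement reads from HDFS: a relative path like 't2.txt' resolves against the current user's HDFS home directory (/user/root here), so the file must be uploaded before the LOAD. A sketch, assuming t2.txt sits in the temp directory used earlier:

bin/hadoop fs -put /home/sri_udap/app/apache-hive-2.3.2-bin/temp/t2.txt t2.txt

Also be aware that LOAD DATA INPATH moves the source file into the table's warehouse directory rather than copying it.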

At this point simple queries already work, but to actually trigger a MapReduce job, let's write something slightly more complex:

hive> select t2.name from t1 left join t2 on t1.id = t2.id;

WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.

Query ID = root_20171228104347_a63966e5-d32a-41c9-a363-79aef39cac63
Total jobs = 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/sri_udap/app/apache-hive-2.3.2-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/sri_udap/app/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2017-12-28 10:43:53 Starting to launch local task to process map join; maximum memory = 932184064
2017-12-28 10:43:54 Dump the side-table for tag: 1 with group count: 4 into file: file:/home/sri_udap/app/apache-hive-2.3.2-bin/temp/${system:user.name}/9c9855ee-f160-48d4-ab74-9d597c81bb13/hive_2017-12-28_10-43-47_556_6806677688398200490-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile31--.hashtable
2017-12-28 10:43:54 Uploaded 1 File to: file:/home/sri_udap/app/apache-hive-2.3.2-bin/temp/${system:user.name}/9c9855ee-f160-48d4-ab74-9d597c81bb13/hive_2017-12-28_10-43-47_556_6806677688398200490-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile31--.hashtable (364 bytes)
2017-12-28 10:43:54 End of local task; Time Taken: 1.103 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1514424221956_0004, Tracking URL = http://master:8088/proxy/application_1514424221956_0004/
Kill Command = /home/sri_udap/app/hadoop-2.7.2/bin/hadoop job -kill job_1514424221956_0004
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2017-12-28 10:44:10,516 Stage-3 map = 0%, reduce = 0%
2017-12-28 10:44:16,416 Stage-3 map = 100%, reduce = 0%, Cumulative CPU 1.88 sec
MapReduce Total cumulative CPU time: 1 seconds 880 msec
Ended Job = job_1514424221956_0004
MapReduce Jobs Launched:
Stage-Stage-3: Map: 1 Cumulative CPU: 1.88 sec HDFS Read: 5568 HDFS Write: 205 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 880 msec

OK

a

b

c

That's it. Feel free to reach out with any questions.
