Hive on Tez: detailed configuration and test run


Tags (space-separated): tez Hadoop Hive hdfs yarn


Environment: hadoop-2.5.2, hive-0.14, tez-0.5.3
There are two ways to set up Hive on Tez:

  1. Configure it in Hadoop
  2. Configure it in Hive

Comparison:
The second approach suits the case where you already have a stable Hadoop cluster and do not want to touch it. With it, only Hive can switch its execution engine dynamically (set hive.execution.engine=tez; or set hive.execution.engine=mr;), while all other MapReduce programs keep running on YARN as before.
The first approach is more invasive and affects the existing cluster: you set mapreduce.framework.name to yarn-tez in Hadoop's mapred-site.xml, which means every MR job submitted through this cluster is forced through Tez. Once that is done, Hive also runs on Tez by default with no further configuration.
Because of this, I took quite a few detours at the start trying to find the second configuration method.

Before starting, you need to build Tez from source yourself:

root@localhost:/opt/work# wget http://www.eu.apache.org/dist/tez/0.5.3/apache-tez-0.5.3-src.tar.gz
root@localhost:/opt/work# tar zxvf apache-tez-0.5.3-src.tar.gz
root@localhost:/opt/work# cd apache-tez-0.5.3-src
root@localhost:/opt/work/apache-tez-0.5.3-src# mvn clean package -DskipTests=true -Dmaven.javadoc.skip=true   # the build is slow; if it fails midway, rerun the mvn command until it succeeds. After a successful build the directory looks like this:
root@localhost:/opt/work/apache-tez-0.5.3-src# ll
total 204
drwxrwxr-x 15  500  500  4096  526 16:29 ./
drwxr-xr-x 38 root root  4096  526 16:34 ../
-rw-rw-r--  1  500  500  5753 125 06:25 BUILDING.txt
-rw-rw-r--  1  500  500 60199 125 06:25 CHANGES.txt
drwxrwxr-x  4  500  500  4096  526 16:30 docs/
-rw-rw-r--  1  500  500    66 125 06:25 .gitignore
lrwxrwxrwx  1  500  500    33 125 06:25 INSTALL.md -> docs/src/site/markdown/install.md
-rw-rw-r--  1  500  500 14470 125 06:25 KEYS
-rw-rw-r--  1  500  500 11358 125 06:25 LICENSE.txt
-rw-rw-r--  1  500  500   164 125 06:25 NOTICE.txt
-rw-rw-r--  1  500  500 34203 125 06:25 pom.xml
-rw-rw-r--  1  500  500  1433 125 06:25 README.md
drwxr-xr-x  3 root root  4096  526 16:29 target/
drwxrwxr-x  4  500  500  4096  526 16:29 tez-api/
drwxrwxr-x  4  500  500  4096  526 16:29 tez-common/
drwxrwxr-x  4  500  500  4096  526 16:29 tez-dag/
drwxrwxr-x  4  500  500  4096  526 16:30 tez-dist/
drwxrwxr-x  4  500  500  4096  526 16:29 tez-examples/
drwxrwxr-x  4  500  500  4096  526 16:29 tez-mapreduce/
drwxrwxr-x  5  500  500  4096  526 16:29 tez-plugins/
drwxrwxr-x  4  500  500  4096  526 16:29 tez-runtime-internals/
drwxrwxr-x  4  500  500  4096  526 16:29 tez-runtime-library/
drwxrwxr-x  4  500  500  4096  526 16:29 tez-tests/
drwxrwxr-x  3  500  500  4096 125 06:25 tez-tools/
root@localhost:/opt/work/apache-tez-0.5.3-src# ll tez-dist/target/
total 40444
drwxr-xr-x 5 root root     4096  526 16:30 ./
drwxrwxr-x 4  500  500     4096  526 16:30 ../
drwxr-xr-x 2 root root     4096  526 16:30 archive-tmp/
drwxr-xr-x 2 root root     4096  526 16:30 maven-archiver/
drwxr-xr-x 3 root root     4096  526 16:30 tez-0.5.3/
-rw-r--r-- 1 root root 10625995  526 16:30 tez-0.5.3-minimal.tar.gz
-rw-r--r-- 1 root root 30757128  526 16:30 tez-0.5.3.tar.gz
-rw-r--r-- 1 root root     2791  526 16:30 tez-dist-0.5.3-tests.jar
root@localhost:/opt/work/apache-tez-0.5.3-src# 
 
 

The compiled tez-dist/target/tez-0.5.3.tar.gz is the Tez binary package we need. Upload it to a directory on HDFS:

[hadoop@mymaster local]$ hadoop fs -mkdir /apps
[hadoop@mymaster local]$ hadoop fs -copyFromLocal tez-0.5.3.tar.gz /apps/
[hadoop@mymaster local]$ hadoop fs -ls /apps
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/oneapm/local/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/oneapm/local/tez-0.5.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Found 1 items
-rw-r--r--   2 hadoop supergroup   30757128 2015-05-26 16:53 /apps/tez-0.5.3.tar.gz
[hadoop@mymaster local]$ 
 
 

Next, on the Hadoop master node, create a tez-site.xml file under $HADOOP_HOME/etc/hadoop/ with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>tez.lib.uris</name>
                <value>${fs.defaultFS}/apps/tez-0.5.3.tar.gz</value>
        </property>
</configuration>
 
 

Everything above is required regardless of approach. The two ways to configure Hive on Tez are described next.

Approach 1: configure in Hadoop

Add the Tez jars to $HADOOP_CLASSPATH by appending the following to the end of hadoop-env.sh:

export TEZ_HOME=/oneapm/local/tez-0.5.3    # the directory where Tez is unpacked
for jar in `ls $TEZ_HOME |grep jar`; do
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$TEZ_HOME/$jar
done
for jar in `ls $TEZ_HOME/lib`; do
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$TEZ_HOME/lib/$jar
done
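The `ls`-based loops above work, but a glob-based variant is a bit more robust against unusual filenames. A minimal sketch with the same effect (the function name `build_tez_classpath` is my own, not part of any Hadoop script):

```shell
#!/bin/sh
# Build a colon-separated classpath from the jars directly under $TEZ_HOME
# and under $TEZ_HOME/lib, mirroring the two loops above.
build_tez_classpath() {
    tez_home="$1"
    cp=""
    for jar in "$tez_home"/*.jar "$tez_home"/lib/*.jar; do
        [ -e "$jar" ] || continue    # skip glob patterns that matched nothing
        cp="$cp:$jar"
    done
    printf '%s\n' "${cp#:}"          # drop the leading colon
}
```

In hadoop-env.sh you would then write `export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$(build_tez_classpath /oneapm/local/tez-0.5.3)`.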
 
 

Modify the mapred-site.xml file:

<property>
        <name>mapreduce.framework.name</name>
        <value>yarn-tez</value>
</property>
 
 

After these changes, sync mapred-site.xml, hadoop-env.sh, and tez-site.xml to every node in the cluster. This affects the whole cluster, which is exactly why I preferred not to do it.
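Distributing the three files by hand is error-prone; a minimal sketch that prints the scp commands for a list of nodes (the hostnames and the /opt/hadoop path here are made up; it assumes passwordless SSH and an identical $HADOOP_HOME on every node):

```shell
#!/bin/sh
# Sketch: print one scp command per node read from stdin, copying the three
# changed config files. Pipe the output to sh to actually run the copies.
gen_sync_cmds() {
    conf="$1"
    while read -r node; do
        printf 'scp %s/hadoop-env.sh %s/mapred-site.xml %s/tez-site.xml %s:%s/\n' \
            "$conf" "$conf" "$conf" "$node" "$conf"
    done
}
# Example with two made-up hostnames:
printf 'node1\nnode2\n' | gen_sync_cmds /opt/hadoop/etc/hadoop
```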
Run the Tez example MR program to verify the installation:

[hadoop@mymaster tez-0.5.3]$ hadoop jar $TEZ_HOME/tez-examples-0.5.3.jar orderedwordcount /license.txt /out
 
 

Prepare a license.txt yourself and upload it to HDFS. If the job runs cleanly, the YARN web UI on port 8088 looks like this:
(screenshot placeholder: YARN applications page)
The Application Type column shows TEZ, which means the installation succeeded.

Approach 2: configure in Hive

Before starting the second approach, undo the steps from the first one so the Hadoop configuration is back in its original state; tez-site.xml only needs to exist on the master node.

Copy the jars under the Tez directory and under its lib/ subdirectory into $HIVE_HOME/lib.
During setup, my Hive ran on the same node as the Hadoop master, started as the hadoop user, and both tez and mr worked without issue. But since the master node was short on physical resources, I migrated the same Hive setup to a clean host, hiveclient. There, Hive on MR jobs ran fine, but Hive on Tez failed with the following error:

hive (default)> set hive.execution.engine=tez;                                                
hive (default)> select json_udtf(data) from tpm.tps_dc_metricdata where pt=2015060200 limit 1;
Query ID = blueadmin_20150603130202_621abba7-850e-4683-8331-aee8482f2ebe
Total jobs = 1
Launching Job 1 out of 1
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
hive (default)> 
 
 

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
Many different problems can cause this error, and the message alone does not tell you which, so the only option is to read Hive's job log. By default, $HIVE_HOME/conf/hive-log4j.properties sets hive.log.dir=${java.io.tmpdir}/${user.name}, so Hive writes its job and run logs under /tmp/{user}/. The log contained the following information.
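A quick way to locate that log file, as a sketch (assuming the default hive.log.dir setting just described):

```shell
# The default hive.log.dir=${java.io.tmpdir}/${user.name} resolves to /tmp/<user>,
# so the run log of the current user is here:
HIVE_LOG="/tmp/$(whoami)/hive.log"
echo "$HIVE_LOG"
# tail -f "$HIVE_LOG"    # follow it while re-running the failing query
```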

2015-06-03 13:03:01,071 INFO  [main]: tez.DagUtils (DagUtils.java:createLocalResource(718)) - Resource modification time: 1433307781075
2015-06-03 13:03:01,126 ERROR [main]: exec.Task (TezTask.java:execute(184)) - Failed to execute tez graph.
java.io.FileNotFoundException: File does not exist: hdfs:/user/hivetest
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
        at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getDefaultDestDir(DagUtils.java:774)
        at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getHiveJarDirectory(DagUtils.java:870)
        at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createJarLocalResource(TezSessionState.java:337)
        at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:158)
        at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:234)

        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
2015-06-03 13:03:01,127 ERROR [main]: ql.Driver (SessionState.java:printError(833)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
 
 

The log says HDFS has no /user/hivetest directory. Indeed, the Hive on my hiveclient host runs as the hivetest user, which has no home directory on HDFS. No directory? Then create one:

[hivetest@mymaster tez-0.5.3]$ hadoop fs -mkdir /user/hivetest
 
 

With that, the problem was solved; just start Hive again. What follows are my initial test results for Hive on Tez versus Hive on YARN.

Start Hive and run a test

hive (default)> set hive.execution.engine=tez;
hive (default)> select t.a,count(1) from (select split(data,'\t')[1] a,split(data,'\t')[2] b from tpm.tps_dc_metricdata limit 1000) t group by t.a ;
Query ID = hadoop_20150526184141_556cf5d8-edf3-430a-b21a-513c35679567
Total jobs = 1
Launching Job 1 out of 1
Tez session was closed. Reopening...
Session re-established.


Status: Running (Executing on YARN cluster with App id application_1432632452478_0005)

--------------------------------------------------------------------------------
        VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1 ..........   SUCCEEDED     26         26        0        0       0       0
Reducer 2 ......   SUCCEEDED      1          1        0        0       0       0
Reducer 3 ......   SUCCEEDED      1          1        0        0       0       0
--------------------------------------------------------------------------------
VERTICES: 03/03  [==========================>>] 100%  ELAPSED TIME: 24.60 s    
--------------------------------------------------------------------------------
OK
t.a     _c1
1       17
10      7
100     3
101     6
105     1
117     2
118     2
119     11
12      3
120     3
121     1
123     4
124     4
125     16
126     4
127     9
129     10
142     6
221     1
(n more rows omitted)
Time taken: 30.637 seconds, Fetched: 207 row(s)
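For reference, the grouping this query performs is equivalent to the pipeline below over tab-separated text (the three sample rows are made up; note that Hive's split(data,'\t')[1] is 0-indexed, hence awk field 2):

```shell
# Count rows grouped by the second tab-separated field, like the HiveQL above.
printf 'x\t1\ty\nx\t1\tz\nx\t2\ty\n' \
    | awk -F'\t' '{c[$2]++} END {for (k in c) print k, c[k]}' \
    | sort
# prints:
# 1 2
# 2 1
```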
 
 

set hive.execution.engine=tez; selects the Tez engine; to go back to plain YARN MapReduce, just run set hive.execution.engine=mr;.
While a Tez job runs, it shows the rather nice progress bar seen above. The query tests against 1000 records (limit 1000).

Hive on YARN (MR)

hive (tpm)> set hive.execution.engine=mr;
hive (tpm)> select t.a,count(1) from (select split(data,'\t')[1] a,split(data,'\t')[2] b from tpm.tps_dc_metricdata limit 1000) t group by t.a ;
Query ID = hadoop_20150526140606_d73156e0-c81c-4b2a-bfb6-fd1d48fa8325
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1432521221608_0008, Tracking URL = http://mymaster:8088/proxy/application_1432521221608_0008/
Kill Command = /oneapm/local/hadoop-2.5.2/bin/hadoop job  -kill job_1432521221608_0008
Hadoop job information for Stage-1: number of mappers: 70; number of reducers: 1
2015-05-26 14:06:53,584 Stage-1 map = 0%,  reduce = 0%
2015-05-26 14:07:13,931 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 3.46 sec
2015-05-26 14:07:15,004 Stage-1 map = 9%,  reduce = 0%, Cumulative CPU 21.37 sec
2015-05-26 14:07:18,198 Stage-1 map = 12%,  reduce = 0%, Cumulative CPU 43.02 sec
2015-05-26 14:07:19,260 Stage-1 map = 19%,  reduce = 0%, Cumulative CPU 47.7 sec
2015-05-26 14:07:20,322 Stage-1 map = 20%,  reduce = 0%, Cumulative CPU 48.52 sec
(output omitted)
OK
t.a     _c1
1       15
10      8
100     1
101     3
102     2
104     5
105     1
106     3
107     10
109     2

(output omitted)
Time taken: 152.971 seconds, Fetched: 207 row(s)
 
 

Result of this test: Hive on Tez ran roughly 5x faster than on plain YARN.

The speedup is most significant for SQL that produces multiple Hive stages; results may differ to varying degrees on other platforms.

Summary: with the second configuration above, the cluster's default engine is still YARN, and Hive can switch freely between mr and tez with no impact on existing Hadoop MR jobs. Job status can still be watched on port 8088, and the progress bar Tez draws in the Hive CLI is quite nice.

References

Official site: (link placeholder)
A good blog post: (link placeholder)
