While running tests recently, I noticed that my Impala development environment could no longer execute Hive queries; the error was a failed Tez job:
Failing this attempt.Diagnostics: [2023-04-23 10:07:38.941]Exception from container-launch.
Container id: container_1682215633225_0001_01_000001
Exit code: 1
[2023-04-23 10:07:38.969]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
For more detailed output, check the application tracking page: http://localhost:8088/cluster/app/application_1682215633225_0001 Then click on links to logs of each attempt.
. Failing the application.
at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:979) ~[tez-api-0.9.1.7.2.17.0-160.jar:0.9.1.7.2.17.0-160]
at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:948) ~[tez-api-0.9.1.7.2.17.0-160.jar:0.9.1.7.2.17.0-160]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:572) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:385) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:300) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolSession.open(TezSessionPoolSession.java:106) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.ensureSessionHasResources(TezTask.java:463) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:224) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:360) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:333) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:250) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:111) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:806) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:540) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:534) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) ~[hive-exec-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:232) ~[hive-service-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:89) ~[hive-service-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:338) ~[hive-service-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_362]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_362]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.1.1.7.2.17.0-160.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:358) ~[hive-service-3.1.3000.7.2.17.0-160.jar:3.1.3000.7.2.17.0-160]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_362]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_362]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_362]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_362]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_362]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_362]
at java.lang.Thread.run(Thread.java:750) [?:1.8.0_362]
Following the hint, I opened the application tracking page: http://localhost:8088/cluster/app/application_1682215633225_0001
Drilling into the logs of the failed attempt and inspecting launch_container.sh, I found that the CLASSPATH was wrong:
export CLASSPATH="$PWD:$PWD/*:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:/home/quanlong/workspace/Impala/toolchain/cdp_components-36109364/tez-0.9.1.7.2.17.0-75-minimal/*:/home/quanlong/workspace/Impala/toolchain/cdp_components-36109364/tez-0.9.1.7.2.17.0-75-minimal/lib/*:"
This references toolchain/cdp_components-36109364/tez-0.9.1.7.2.17.0-75-minimal, which corresponds to CDP build 7.2.17.0-75. But my local environment had already been upgraded to CDP 7.2.17.0-160, as the jar name suffixes in the stack trace above also confirm.
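A quick way to spot this kind of mismatch is to grep the generated configs for Tez paths and compare the version suffix against the jar names in the stack trace. A minimal sketch (the temp file here is a synthetic stand-in for fe/src/test/resources/yarn-site.xml):

```shell
# Synthetic sample standing in for the real yarn-site.xml
conf=$(mktemp)
cat > "$conf" <<'EOF'
<value>...,/home/quanlong/workspace/Impala/toolchain/cdp_components-36109364/tez-0.9.1.7.2.17.0-75-minimal/*</value>
EOF

# Extract the Tez version suffix referenced by the config;
# if it doesn't match the jars in the error stack, the config is stale
grep -o 'tez-[0-9.]*-[0-9]*-minimal' "$conf"
```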
Checking the local configuration, I found two stale files:
fe/src/test/resources/yarn-site.xml
fe/target/test-classes/yarn-site.xml
Both set yarn.application.classpath using the old Tez directory:
<property>
  <name>yarn.application.classpath</name>
  <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*,/home/quanlong/workspace/Impala/toolchain/cdp_components-36109364/tez-0.9.1.7.2.17.0-75-minimal/*,/home/quanlong/workspace/Impala/toolchain/cdp_components-36109364/tez-0.9.1.7.2.17.0-75-minimal/lib/*</value>
</property>
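For reference, after regeneration the value should point at the Tez directory matching the current CDP build, i.e. with a 7.2.17.0-160 suffix. Sketched below with the leading Hadoop entries abbreviated and the cdp_components build id replaced by a <BUILD> placeholder, since that id may also change between toolchain builds:

```xml
<property>
  <name>yarn.application.classpath</name>
  <!-- ... Hadoop entries unchanged, then the Tez paths matching the current CDP build ... -->
  <value>...,/home/quanlong/workspace/Impala/toolchain/cdp_components-<BUILD>/tez-0.9.1.7.2.17.0-160-minimal/*,/home/quanlong/workspace/Impala/toolchain/cdp_components-<BUILD>/tez-0.9.1.7.2.17.0-160-minimal/lib/*</value>
</property>
```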
So the fix is simply to regenerate the configuration (no need to restart YARN or Hive):
bin/create-test-configuration.sh
cp fe/src/test/resources/yarn-site.xml fe/target/test-classes/yarn-site.xml
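Since fe/target/test-classes/ is only refreshed on an FE rebuild, the two copies drift easily; a quick sanity check after the fix is to diff them. Sketched here with temp files so it runs anywhere (substitute the real fe/src/test/resources/yarn-site.xml and fe/target/test-classes/yarn-site.xml paths in an actual Impala tree):

```shell
# Stand-ins for the two yarn-site.xml copies (use the real paths in an Impala tree)
src=$(mktemp); dst=$(mktemp)
echo '<value>.../tez-0.9.1.7.2.17.0-160-minimal/*</value>' > "$src"
cp "$src" "$dst"

# After the cp step of the fix, the two copies must be identical
diff -q "$src" "$dst" && echo "configs in sync"
```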
Summary
Whenever bin/impala-config.sh is updated in a way that changes CDP_BUILD_NUMBER, re-run bin/create-test-configuration.sh. If you don't rebuild the FE afterwards, you also need to manually copy the regenerated config files into fe/target/test-classes/.