Compiling Hive 3.1.2


1. Modify the pom file

Set the following version properties in Hive's root pom.xml:

    <spark.version>3.1.1</spark.version>
    <scala.binary.version>2.12</scala.binary.version>
    <scala.version>2.12.10</scala.version>
    <hadoop.version>3.2.2</hadoop.version>
    <guava.version>27.0-jre</guava.version>
    <druid.version>0.12.3</druid.version>
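For context, these entries belong in the `<properties>` section of Hive's top-level pom.xml (surrounding properties omitted):

```xml
<properties>
  <!-- versions bumped for the Spark 3 / Hadoop 3.2 build -->
  <spark.version>3.1.1</spark.version>
  <scala.binary.version>2.12</scala.binary.version>
  <scala.version>2.12.10</scala.version>
  <hadoop.version>3.2.2</hadoop.version>
  <guava.version>27.0-jre</guava.version>
  <druid.version>0.12.3</druid.version>
</properties>
```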

2. Classes that need to be modified

With the versions above, compilation fails in several modules. The Maven output below identifies each class that has to be patched:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project hive-llap-common: Compilation failure: Compilation failure: 
[ERROR] /opt/workspace/hive/llap-common/src/java/org/apache/hadoop/hive/llap/AsyncPbRpcProxy.java:[173,16] method addCallback in class com.google.common.util.concurrent.Futures cannot be applied to given types;

[ERROR] /opt/workspace/hive/llap-common/src/java/org/apache/hadoop/hive/llap/AsyncPbRpcProxy.java:[274,12] method addCallback in class com.google.common.util.concurrent.Futures cannot be applied to given types;
[ERROR]   required: com.google.common.util.concurrent.ListenableFuture<V>,com.google.common.util.concurrent.FutureCallback<? super V>,java.util.concurrent.Executor
[ERROR]   found: com.google.common.util.concurrent.ListenableFuture<java.lang.Void>,<anonymous com.google.common.util.concurrent.FutureCallback<java.lang.Void>>
[ERROR]   reason: cannot infer type-variable(s) V
[ERROR]     (actual and formal argument lists differ in length)
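All of the addCallback failures in this section (hive-llap-common, hive-llap-tez, hive-exec, hive-llap-server) share one cause: Guava 27 removed the two-argument Futures.addCallback overload, so every flagged call site needs an explicit Executor as a third argument. A minimal self-contained illustration (the class name and values here are mine, not Hive's):

```java
import java.util.concurrent.Executors;

import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

public class AddCallbackFix {
  public static void main(String[] args) throws Exception {
    ListeningExecutorService pool =
        MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor());
    ListenableFuture<Integer> future = pool.submit(() -> 42);

    // was: Futures.addCallback(future, callback);
    // Guava 27 requires the Executor explicitly; directExecutor() runs the
    // callback on the thread that completes the future, matching the old
    // two-argument behavior.
    Futures.addCallback(future, new FutureCallback<Integer>() {
      @Override public void onSuccess(Integer result) {
        System.out.println("got " + result);
      }
      @Override public void onFailure(Throwable t) {
        t.printStackTrace();
      }
    }, MoreExecutors.directExecutor());

    future.get();   // wait for completion so the callback has fired
    pool.shutdown();
  }
}
```

In the Hive sources the fix is mechanical: append `MoreExecutors.directExecutor()` (or a suitable existing executor) as the third argument at each call site listed in the errors.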


[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project hive-llap-tez: Compilation failure: Compilation failure: 
[ERROR] /opt/workspace/hive/llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/LlapTaskSchedulerService.java:[747,14] method addCallback in class com.google.common.util.concurrent.Futures cannot be applied to given types;
[ERROR]   required: com.google.common.util.concurrent.ListenableFuture<V>,com.google.common.util.concurrent.FutureCallback<? super V>,java.util.concurrent.Executor
[ERROR]   found: com.google.common.util.concurrent.ListenableFuture<java.lang.Void>,org.apache.hadoop.hive.llap.tezplugins.scheduler.LoggingFutureCallback
[ERROR]   reason: cannot infer type-variable(s) V
[ERROR]     (actual and formal argument lists differ in length)


[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project hive-spark-client: Compilation failure: Compilation failure: 
[ERROR] /opt/workspace/hive/spark-client/src/main/java/org/apache/hive/spark/counter/SparkCounter.java:[22,24] cannot find symbol
[ERROR]   symbol:   class Accumulator
[ERROR]   location: package org.apache.spark
[ERROR] /opt/workspace/hive/spark-client/src/main/java/org/apache/hive/spark/counter/SparkCounter.java:[23,24] cannot find symbol
[ERROR]   symbol:   class AccumulatorParam
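Spark 3 removed org.apache.spark.Accumulator and AccumulatorParam in favor of the AccumulatorV2 API. A minimal sketch of how SparkCounter can be migrated (field and method set abridged; treat this as an outline of the change, not the exact patch):

```java
// Sketch: SparkCounter moved from the removed Accumulator/AccumulatorParam
// API to Spark 3's built-in LongAccumulator (an AccumulatorV2 subclass).
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.LongAccumulator;

public class SparkCounter implements java.io.Serializable {

  private final String name;
  private LongAccumulator accumulator;  // was: Accumulator<Long>

  public SparkCounter(String name, String displayName,
                      JavaSparkContext sparkContext, long initValue) {
    this.name = name;
    // sparkContext.sc().longAccumulator(...) replaces the old
    // sparkContext.accumulator(initValue, param) construction.
    this.accumulator = sparkContext.sc().longAccumulator(displayName);
    this.accumulator.setValue(initValue);
  }

  public long getValue() {
    return accumulator.value();
  }

  public void increment(long incr) {
    accumulator.add(incr);
  }
}
```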


[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project hive-spark-client: Compilation failure: Compilation failure: 
[ERROR] /opt/workspace/hive/spark-client/src/main/java/org/apache/hive/spark/client/metrics/ShuffleWriteMetrics.java:[50,39] cannot find symbol
[ERROR]   symbol:   method shuffleBytesWritten()
[ERROR]   location: class org.apache.spark.executor.ShuffleWriteMetrics
[ERROR] /opt/workspace/hive/spark-client/src/main/java/org/apache/hive/spark/client/metrics/ShuffleWriteMetrics.java:[51,36] cannot find symbol
[ERROR]   symbol:   method shuffleWriteTime()
[ERROR]   location: class org.apache.spark.executor.ShuffleWriteMetrics
[ERROR] -> [Help 1]
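On the Spark side these getters were renamed (shuffleBytesWritten() became bytesWritten(), shuffleWriteTime() became writeTime()), so the constructor of Hive's ShuffleWriteMetrics wrapper only needs its two calls updated. A sketch of the changed lines:

```java
// In org.apache.hive.spark.client.metrics.ShuffleWriteMetrics:
public ShuffleWriteMetrics(TaskMetrics metrics) {
  this(metrics.shuffleWriteMetrics().bytesWritten(),  // was: shuffleBytesWritten()
       metrics.shuffleWriteMetrics().writeTime());    // was: shuffleWriteTime()
}
```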

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project hive-exec: Compilation failure: Compilation failure: 
[ERROR] /opt/workspace/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java:[1095,12] method addCallback in class com.google.common.util.concurrent.Futures cannot be applied to given types;
[ERROR]   required: com.google.common.util.concurrent.ListenableFuture<V>,com.google.common.util.concurrent.FutureCallback<? super V>,java.util.concurrent.Executor
[ERROR]   found: com.google.common.util.concurrent.ListenableFuture<capture#1 of ?>,com.google.common.util.concurrent.FutureCallback<java.lang.Object>
[ERROR]   reason: cannot infer type-variable(s) V

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:testCompile (default-testCompile) on project hive-exec: Compilation failure
[ERROR] /opt/workspace/hive/ql/src/test/org/apache/hadoop/hive/ql/stats/TestStatsUtils.java:[34,39] package org.spark_project.guava.collect does not exist
[ERROR] 
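Spark 3 no longer ships a shaded Guava under org.spark_project, so the import in TestStatsUtils has to point at plain Guava instead. The class name below (`Sets`) is my assumption about what that import line pulls in; substitute whatever the file actually imports:

```java
// was: import org.spark_project.guava.collect.Sets;   // shaded package gone in Spark 3
import com.google.common.collect.Sets;                 // unshaded Guava equivalent
```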

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:testCompile (default-testCompile) on project hive-exec: Compilation failure
[ERROR] /opt/workspace/hive/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/SampleTezSessionState.java:[121,12] method addCallback in class com.google.common.util.concurrent.Futures cannot be applied to given types;
[ERROR]   required: com.google.common.util.concurrent.ListenableFuture<V>,com.google.common.util.concurrent.FutureCallback<? super V>,java.util.concurrent.Executor
[ERROR]   found: com.google.common.util.concurrent.ListenableFuture<java.lang.Boolean>,<anonymous com.google.common.util.concurrent.FutureCallback<java.lang.Boolean>>
[ERROR]   reason: cannot infer type-variable(s) V
[ERROR]     (actual and formal argument lists differ in length)

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project hive-llap-server: Compilation failure: Compilation failure: 
[ERROR] /opt/workspace/hive/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/AMReporter.java:[162,12] method addCallback in class com.google.common.util.concurrent.Futures cannot be applied to given types;
[ERROR]   required: com.google.common.util.concurrent.ListenableFuture<V>,com.google.common.util.concurrent.FutureCallback<? super V>,java.util.concurrent.Executor
[ERROR]   found: com.google.common.util.concurrent.ListenableFuture<java.lang.Void>,<anonymous com.google.common.util.concurrent.FutureCallback<java.lang.Void>>
[ERROR]   reason: cannot infer type-variable(s) V
[ERROR]     (actual and formal argument lists differ in length)
[ERROR] /opt/workspace/hive/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/TaskExecutorService.java:[178,12] method addCallback in class com.google.common.util.concurrent.Futures cannot be applied to given types;
[ERROR]   required: com.google.common.util.concurrent.ListenableFuture<V>,com.google.common.util.concurrent.FutureCallback<? super V>,java.util.concurrent.Executor
[ERROR]   found: com.google.common.util.concurrent.ListenableFuture<capture#1 of ?>,org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.WaitQueueWorkerCallback
[ERROR]   reason: cannot infer type-variable(s) V
[ERROR]     (actual and formal argument lists differ in length)
[ERROR] /opt/workspace/hive/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapTaskReporter.java:[131,12] method addCallback in class com.google.common.util.concurrent.Futures cannot be applied to given types;
[ERROR]   required: com.google.common.util.concurrent.ListenableFuture<V>,com.google.common.util.concurrent.FutureCallback<? super V>,java.util.concurrent.Executor
[ERROR]   found: com.google.common.util.concurrent.ListenableFuture<java.lang.Boolean>,org.apache.hadoop.hive.llap.daemon.impl.LlapTaskReporter.HeartbeatCallback
[ERROR]   reason: cannot infer type-variable(s) V
[ERROR]     (actual and formal argument lists differ in length)
[ERROR] -> [Help 1]

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project hive-druid-handler: Compilation failure
[ERROR] /opt/workspace/hive/druid-handler/src/java/org/apache/hadoop/hive/druid/serde/DruidScanQueryRecordReader.java:[46,61] <T>emptyIterator() is not public in com.google.common.collect.Iterators; cannot be accessed from outside package
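Guava 27 made Iterators.emptyIterator() package-private; the JDK's Collections.emptyIterator() is a drop-in replacement for the flagged line in DruidScanQueryRecordReader. A self-contained illustration:

```java
import java.util.Collections;
import java.util.Iterator;

public class EmptyIteratorFix {
  public static void main(String[] args) {
    // was: Iterators.<Row>emptyIterator()  -- no longer public in Guava 27
    Iterator<String> it = Collections.emptyIterator();
    System.out.println(it.hasNext()); // prints "false"
  }
}
```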

3. Run the build command

mvn clean package -Pdist -DskipTests

4. Errors when running Hive

Even after a successful build, Hive on Spark can still hit Guava conflicts at runtime, such as the one below. A NoSuchMethodError on Preconditions.checkArgument usually means an older Guava jar somewhere on the classpath (commonly under Spark's jars/ directory) is shadowing the Guava 27 that Hadoop 3.2.2's Configuration expects; swapping that jar for guava-27.0-jre.jar typically clears it.

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
        at org.apache.spark.deploy.SparkHadoopUtil$.$anonfun$appendSparkHadoopConfigs$6(SparkHadoopUtil.scala:481)
        at org.apache.spark.deploy.SparkHadoopUtil$.$anonfun$appendSparkHadoopConfigs$6$adapted(SparkHadoopUtil.scala:480)
        at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:877)
        at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
        at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:876)
        at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendSparkHadoopConfigs(SparkHadoopUtil.scala:480)
        at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopHiveConfigurations(SparkHadoopUtil.scala:454)
        at org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:427)
        at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$2(SparkSubmit.scala:342)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:342)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

        at org.apache.hive.spark.client.rpc.RpcServer.cancelClient(RpcServer.java:211) ~[hive-exec-3.1.2.jar:3.1.2]
        at org.apache.hive.spark.client.SparkClientImpl$2.run(SparkClientImpl.java:491) ~[hive-exec-3.1.2.jar:3.1.2]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
2021-05-24T17:20:12,167 ERROR [Thread-14] spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 63df0c54-c8f5-495d-bcb9-135a6b9f4ac9)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session 63df0c54-c8f5-495d-bcb9-135a6b9f4ac9
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:215)
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92)
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115)
        at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136)
        at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:76)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Cancel client '63df0c54-c8f5-495d-bcb9-135a6b9f4ac9'. Error: Child process (spark-submit) exited before connecting back with error log SLF4J: Class path contains multiple SLF4J bindings.
