Connecting to Hive-2.1.1 with the sql-client in Flink-1.11


A note before we start:

Connecting to Hive through the sql-client is a convenient way to read from and write to Hive and to test SQL statements from a client, and it is the first step in verifying Hive access after standing up Flink. Earlier attempts stalled on version compatibility: Flink 1.9.1 only targeted Hive-1.2.1 and Hive-2.3.4, and without changing the Hive version the alternative approaches were fairly cumbersome. Flink-1.11, however, supports connecting to most Hive versions; comparing the source code shows that the new release bridges the APIs of the different Hive versions.

Flink-1.9.1: [screenshot of the Hive versions supported in the source]
Flink-1.11: [screenshot of the Hive versions supported in the source]

0. Environment setup

Compile flink-1.11-SNAPSHOT against cdh-6.1.0.

1. Jar preparation

Place the following jars into the {flink-home}/lib directory:

hive-2.1.0-bin/lib/hive-exec-2.1.0.jar

{flink-compile-home}/flink-connectors/flink-connector-hive/target/flink-connector-hive_2.11-1.11-SNAPSHOT.jar

{flink-shaded-9.0-compile-home}/…/flink-shaded-hadoop-2-uber-3.0.0-cdh6.1.0-9.0.jar

2. Configuration changes

Edit flink-1.11-SNAPSHOT/conf/sql-client-defaults.yaml:

#==============================================================================
# Catalogs
#==============================================================================

# Define catalogs here.

#catalogs: [] # empty list
# A typical catalog definition looks like:
#  - name: myhive
#    type: hive
#    hive-conf-dir: /opt/hive_conf/
#    default-database: ...
catalogs:
   - name: myhive
     type: hive
     hive-conf-dir: /etc/alternatives/hive-conf
     hive-version: 2.1.1
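
For reference, the same catalog can also be registered programmatically from the Table API instead of through the YAML file. The sketch below is only illustrative: it reuses the values from the configuration above (catalog name myhive, hive-conf-dir /etc/alternatives/hive-conf, Hive version 2.1.1), while the class name, the default database and the choice of batch mode are assumptions, not part of the original setup.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveCatalogDemo {
    public static void main(String[] args) {
        // Blink planner in batch mode (streaming mode also works for reads)
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        TableEnvironment tableEnv = TableEnvironment.create(settings);

        // Same values as the catalog entry in sql-client-defaults.yaml above
        HiveCatalog hive = new HiveCatalog(
                "myhive",                       // catalog name
                "default",                      // default database (assumption)
                "/etc/alternatives/hive-conf",  // hive-conf-dir
                "2.1.1");                       // hive-version
        tableEnv.registerCatalog("myhive", hive);
        tableEnv.useCatalog("myhive");
    }
}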

3. Start the sql-client

Start Flink first (some sql-client statements are submitted to the Flink cluster for execution):

bin/start-cluster.sh

Start the sql-client:

bin/sql-client.sh

Then run:

use catalog myhive;
use test_myq;
select * from mytable;
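
The same statements can also be issued from the Table API, continuing the HiveCatalog sketch above; executeSql and TableResult.print are available from Flink 1.11 onwards, and the database test_myq and table mytable are simply the ones used in this walkthrough.

        // Continuing the sketch above: Table API equivalent of the sql-client statements
        tableEnv.useCatalog("myhive");
        tableEnv.useDatabase("test_myq");
        tableEnv.executeSql("SELECT * FROM mytable").print();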

If the following error appears, first check whether Flink has started correctly:

[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.runtime.rest.util.RestClientException: [Failed to deserialize JobGraph.]

If the query instead fails with the error below, the root cause (at the bottom of the stack trace) is an HDFS permission issue: the user running the job (root in this case) has no read access to the table's Hive warehouse directory:

org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:336)
        at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
        at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
        at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
        at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
        ... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:152)
        at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)
        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:379)
        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
        ... 7 more
Caused by: org.apache.flink.runtime.JobException: Creating the input splits caused an error: Permission denied: user=root, access=READ_EXECUTE, inode="/user/hive/warehouse/test_myq.db/mytable":hdfs:hive:drwxrwx--t
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:262)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1853)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1837)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1787)
        at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:79)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:3733)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:1138)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:708)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc