Connecting to Hive and MySQL with the Database Client DBeaver

Chapter 1: Connecting DBeaver to MySQL

Chapter 2: Connecting DBeaver to Hive

Chapter 3: Errors When Running HQL Statements (Backed by MapReduce)

Chapter 1: Connecting DBeaver to MySQL

1. For downloading and installing DBeaver, and for connecting it to MySQL, see my earlier blog post:

Chapter 2: Connecting DBeaver to Hive

Step 1:
(screenshot)
Step 2:
(screenshot)

  • Clicking Test Connection pops up a dialog prompting you to download the driver

Step 3:
1. Add hive-jdbc and hadoop-common by letting DBeaver download them through Maven

  • You can also download them in IDEA first and add them as a folder. Both jars are required; without hadoop-common you will get an error mentioning org.apache.hadoop.conf.Configuration (caused by the missing hadoop-common jar)
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.6.0</version>
</dependency>

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.1.0</version>
</dependency>

(screenshot)

  • The Hive version we use on Linux is hive-1.1.0-cdh5.7.0, and the Hadoop version is hadoop-2.6.0-cdh5.7.0
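Before configuring the connection in DBeaver, it can help to confirm that HiveServer2 is reachable at all. A minimal sanity check with beeline, assuming HiveServer2 runs on hadoop004 at the default port 10000 with user hadoop (the host, port, and user here are assumptions from my environment; substitute your own):

```shell
# Sanity check: connect to HiveServer2 over JDBC from the command line.
# hadoop004:10000 and the user name are assumptions -- replace with your own.
beeline -u "jdbc:hive2://hadoop004:10000/default" -n hadoop -e "show databases;"
```

If this command fails, DBeaver's Test Connection will fail for the same reason, so it is worth fixing connectivity first.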

As shown in the screenshot below, the connection succeeds.

(screenshot)

Chapter 3: Errors When Running HQL Statements (Backed by MapReduce)

  • SQL Error [1] [08S01]: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

The details are as follows:

org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [1] [08S01]: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:134)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:467)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:407)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:146)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:405)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:849)
	at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:2778)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:98)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:146)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:96)
	at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:102)
	at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
	at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:296)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:338)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:131)
	... 11 more

Cause:

  • My guess is that a missing jar is to blame
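The "return code 1" message that the JDBC driver surfaces is generic; the real exception is usually only visible in the logs of the failed MapReduce job. Assuming YARN log aggregation is enabled, the YARN CLI can pull them. The application id below is illustrative, modeled on the job id from the successful CLI run further down; take the real one from the ResourceManager UI at http://hadoop004:8088 or from the HiveServer2 log:

```shell
# Fetch the aggregated container logs for the failed job and scan for the root cause.
# application_1568279387025_0012 is illustrative -- copy the real id from the YARN UI.
yarn logs -applicationId application_1568279387025_0012 | grep -iE -A 5 "error|exception"
```

If a jar really is missing on the cluster side, it would typically show up here as a ClassNotFoundException or NoClassDefFoundError.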

The same SQL statement runs fine directly in the Hive CLI and returns results:

hive (default)> select u.product_id,u.city_id,c.city_name,c.area
              > from
              > (select product_id,city_id from user_click where day = '2019-09-16') as u
              > join
              > (select city_id,city_name,area from city_info) as c
              > on u.city_id = c.city_id
              > limit 10;
Query ID = hadoop_20190913000505_d7a76592-e5c5-46b1-9f9d-16f2fbea1f94
Total jobs = 1
19/09/13 05:42:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Execution log at: /tmp/hadoop/hadoop_20190913000505_d7a76592-e5c5-46b1-9f9d-16f2fbea1f94.log
2019-09-13 05:42:17     Starting to launch local task to process map join;      maximum memory = 518979584
2019-09-13 05:42:25     Dump the side-table for tag: 1 with group count: 10 into file: file:/tmp/hadoop/6b4eb2bd-6a01-48f6-bfc3-28921700835e/hive_2019-09-13_05-40-12_985_394165976221687163-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
2019-09-13 05:42:26     Uploaded 1 File to: file:/tmp/hadoop/6b4eb2bd-6a01-48f6-bfc3-28921700835e/hive_2019-09-13_05-40-12_985_394165976221687163-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable (561 bytes)
2019-09-13 05:42:26     End of local task; Time Taken: 9.436 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1568279387025_0012, Tracking URL = http://hadoop004:8088/proxy/application_1568279387025_0012/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1568279387025_0012
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2019-09-13 05:51:02,733 Stage-3 map = 0%,  reduce = 0%
2019-09-13 05:52:03,668 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 3.93 sec
MapReduce Total cumulative CPU time: 3 seconds 930 msec
Ended Job = job_1568279387025_0012
MapReduce Jobs Launched: 
Stage-Stage-3: Map: 1   Cumulative CPU: 4.09 sec   HDFS Read: 10859 HDFS Write: 169 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 90 msec
OK
72      1       beijing1        NC
68      1       beijing1        NC
40      1       beijing1        NC
21      1       beijing1        NC
63      1       beijing1        NC
60      1       beijing1        NC
30      1       beijing1        NC
96      1       beijing1        NC
71      1       beijing1        NC
8       1       beijing1        NC
Time taken: 763.809 seconds, Fetched: 10 row(s)